189300
https://textbooks.math.gatech.edu/ila/parametric-form.html
Parametric Form — from Interactive Linear Algebra, Dan Margalit and Joseph Rabinoff.

Section 1.3 Parametric Form

Objectives

1. Learn to express the solution set of a system of linear equations in parametric form.
2. Understand the three possibilities for the number of solutions of a system of linear equations.
3. Recipe: parametric form.
4. Vocabulary word: free variable.

Subsection 1.3.1 Free Variables

There is one possibility for the row reduced form of a matrix that we did not see in Section 1.2.

Example (A System with a Free Variable). Consider the linear system

\[
\begin{aligned}
2x + y + 12z &= 1 \\
x + 2y + 9z &= -1.
\end{aligned}
\]

We solve it using row reduction:

\[
\left(\begin{array}{ccc|c} 2 & 1 & 12 & 1 \\ 1 & 2 & 9 & -1 \end{array}\right)
\;\longrightarrow\;
\left(\begin{array}{ccc|c} 1 & 0 & 5 & 1 \\ 0 & 1 & 2 & -1 \end{array}\right).
\]

This row reduced matrix corresponds to the linear system

\[
\begin{aligned}
x + 5z &= 1 \\
y + 2z &= -1.
\end{aligned}
\]

In what sense is the system solved? We rewrite it as

\[
\begin{aligned}
x &= 1 - 5z \\
y &= -1 - 2z.
\end{aligned}
\]

For any value of z, there is exactly one value of x and y that makes the equations true. But we are free to choose any value of z. We have found all solutions: the solution set is the set of all values (x, y, z) where

\[
(x, y, z) = (1 - 5z,\; -1 - 2z,\; z), \qquad z \text{ any real number}.
\]

This is called the parametric form for the solution to the linear system. The variable z is called a free variable.

Figure 2: A picture of the solution set (the yellow line) of the linear system in this example. There is a unique solution for every value of z.

Given the parametric form for the solution to a linear system, we can obtain specific solutions by replacing the free variables with any specific real numbers. For instance, setting z = 0 in the last example gives the solution (x, y, z) = (1, -1, 0), and setting z = 1 gives the solution (x, y, z) = (-4, -3, 1).

Definition. Consider a consistent system of equations in the variables x_1, x_2, ..., x_n. Let A be a row echelon form of the augmented matrix for this system. We say that x_i is a free variable if its corresponding column in A is not a pivot column.

In the above example, the variable z was free because the reduced row echelon form matrix was

\[
\left(\begin{array}{ccc|c} 1 & 0 & 5 & 1 \\ 0 & 1 & 2 & -1 \end{array}\right).
\]

In the matrix

\[
\left(\begin{array}{cccc|c} 1 & \star & 0 & \star & \star \\ 0 & 0 & 1 & \star & \star \end{array}\right)
\]

the free variables are x_2 and x_4. (The augmented column is not free because it does not correspond to a variable.)

Recipe: Parametric form. The parametric form of the solution set of a consistent system of linear equations is obtained as follows.

1. Write the system as an augmented matrix.
2. Row reduce to reduced row echelon form.
3. Write the corresponding (solved) system of linear equations.
4. Move all free variables to the right hand side of the equations.

Moving the free variables to the right hand side of the equations amounts to solving for the non-free variables (the ones that come from pivot columns) in terms of the free variables. One can think of the free variables as independent variables and the non-free variables as dependent ones.

Implicit Versus Parameterized Equations

The solution set of the system of linear equations

\[
\begin{aligned}
2x + y + 12z &= 1 \\
x + 2y + 9z &= -1
\end{aligned}
\]

is a line in R^3, as we saw in this example. These equations are called the implicit equations for the line: the line is defined implicitly as the simultaneous solutions to those two equations. The parametric form

\[
(x, y, z) = (1 - 5z,\; -1 - 2z,\; z)
\]

is called a parameterized equation for the same line. It is an expression that produces all points of the line in terms of one parameter, z.

One should think of a system of equations as being an implicit equation for its solution set, and of the parametric form as being the parameterized equation for the same set. The parametric form is much more explicit: it gives a concrete recipe for producing all solutions.

You can choose any value for the free variables in a (consistent) linear system. Free variables come from the columns without pivots in a matrix in row echelon form.

Example (A Parameterized Plane)

Subsection 1.3.2 Number of Solutions

There are three possibilities for the reduced row echelon form of the augmented matrix of a linear system.

1. The last column is a pivot column. In this case, the system is inconsistent. There are zero solutions, i.e., the solution set is empty. For example, the matrix

\[
\left(\begin{array}{cc|c} 1 & 0 & 0 \\ 0 & 1 & 0 \\ 0 & 0 & 1 \end{array}\right)
\]

comes from a linear system with no solutions.

2. Every column except the last column is a pivot column. In this case, the system has a unique solution. For example, the matrix

\[
\left(\begin{array}{ccc|c} 1 & 0 & 0 & 1 \\ 0 & 1 & 0 & -2 \\ 0 & 0 & 1 & 3 \end{array}\right)
\]

tells us that the unique solution is (x, y, z) = (1, -2, 3).

3. The last column is not a pivot column, and some other column is not a pivot column either. In this case, the system has infinitely many solutions, corresponding to the infinitely many possible values of the free variable(s). For example, in the system corresponding to the matrix

\[
\left(\begin{array}{cccc|c} 1 & -1 & 0 & 2 & 3 \\ 0 & 0 & 1 & -1 & 4 \end{array}\right),
\]

any values for the free variables x_2 and x_4 yield a solution to the system of equations.
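To make the recipe concrete, here is a minimal sketch (my illustration, not part of the textbook) that computes the reduced row echelon form of the running example with SymPy and reads off the parametric solution; the variable names and printed comments are my own.

```python
# A minimal sketch (not from the textbook): parametric form via RREF in SymPy.
import sympy as sp

x, y, z = sp.symbols("x y z")

# Augmented matrix of the system 2x + y + 12z = 1, x + 2y + 9z = -1.
M = sp.Matrix([[2, 1, 12, 1],
               [1, 2,  9, -1]])

R, pivots = M.rref()
print(R)       # Matrix([[1, 0, 5, 1], [0, 1, 2, -1]])
print(pivots)  # (0, 1): columns 0 and 1 are pivot columns, so z (column 2) is free

# Solve for the pivot variables in terms of the free variable z.
sol = sp.linsolve((M[:, :3], M[:, 3]), [x, y, z])
print(sol)     # {(1 - 5*z, -1 - 2*z, z)} -- the parametric form
```

The pivot-column tuple returned by rref() also classifies the system: a pivot in the last (augmented) column means no solutions, pivots in every variable column mean a unique solution, and any pivot-free variable column means infinitely many solutions.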
189301
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Map%3A_Organic_Chemistry_(Wade)_Complete_and_Semesters_I_and_II/Map%3A_Organic_Chemistry_(Wade)/17%3A_Aromatic_Compounds/17.02%3A_The_Structure_and_Properties_of_Benzene_and_its_Derivatives
17.2: The Structure and Properties of Benzene and its Derivatives

Benzene, C6H6, is the simplest member of a large family of hydrocarbons called aromatic hydrocarbons. These compounds contain ring structures and exhibit bonding that must be described using the resonance hybrid concept of valence bond theory or the delocalization concept of molecular orbital theory. (To review these concepts, refer to the earlier chapters on chemical bonding.) The resonance structures for benzene, C6H6, are the two equivalent Kekulé structures, which differ only in the placement of the alternating double bonds.

There are many derivatives of benzene. The hydrogen atoms can be replaced by many different substituents. Aromatic compounds more readily undergo substitution reactions than addition reactions; replacement of one of the hydrogen atoms with another substituent will leave the delocalized double bonds intact. Typical examples of substituted benzene derivatives include toluene, xylene, and styrene. Toluene and xylene are important solvents and raw materials in the chemical industry. Styrene is used to produce the polymer polystyrene.
189302
https://electronics.stackexchange.com/questions/233102/si-derived-units-another-name-for-the-volt
SI derived units: Another name for the Volt [closed] — Electrical Engineering Stack Exchange

Asked May 9, 2016 by futurebird; edited May 9, 2016 by Dave Tweed. Viewed 3k times. Score: 2. Closed: this question needs details or clarity and is not currently accepting answers.

\[
V = \frac{Kg \cdot m^2}{s^2 \cdot A}
\]

I know it's basic, but I was having a hard time understanding the relationship between volts, ohms, watts, and amps until I broke everything down into base units. The exercise got me thinking about alternate names for the volt. For example, a joule can also be called a newton-meter or a coulomb-volt.

The powers in SI-derived units indicate relationships based on rates of change. For example:

- \(m\): distance
- \(m/s\): velocity, change in distance per second
- \(m/s^2\): acceleration, change in velocity per second

Consider \(Kg \cdot m^2\), which measures moments of inertia. That makes volts the acceleration in moments of inertia per amp. That isn't the most helpful observation, considering there isn't anything rotating.
Maybe consider

\[
\frac{Kg}{s^2 \cdot A},
\]

which measures teslas of magnetic strength. That makes volts... teslas of area? Are there any helpful unit-based alternate names for the volt?

Tags: voltage, math, physics, units

Comment (Transistor, May 9, 2016): Just a note: SI units named after a person are lower case when spelled out and capitalised when abbreviated. V = volt, A = ampere, T = tesla, K = kelvin. You have used Kg = kelvin-grams in your question.

3 Answers

Answer (robert bristow-johnson, score 3): A volt is a joule per coulomb, so each coulomb of charge has V joules of energy packed into it somehow. An ampere is a coulomb per second, so in one second, I coulombs of charge pass by. If I coulombs of charge pass a boundary, each packing V joules of energy, then VI joules of energy are being transferred (or used) each second. That rate of energy usage is power, and since a watt is one joule per second, transferring VI joules each second is a power of VI watts. An ohm is a volt per ampere: each ampere of current stuffed into R ohms of resistance results in R volts, so I amperes of current stuffed into R ohms produces a voltage of IR volts across the resistance.

Answer (Transistor, score 2): Wikipedia's definition of the volt differs from yours:

\[
V = \frac{\text{potential energy}}{\text{charge}}
  = \frac{N \cdot m}{\text{coulomb}}
  = \frac{kg \cdot m \cdot m}{s^2 \cdot A \cdot s}
  = \frac{kg \cdot m^2}{A \cdot s^3}
\]

Note the \(s^3\). Otherwise I can't help.

Answer (Neil_UK, score 0): Different names for any unit can be useful, depending on what discipline you are active in and what aspects of the system you want to highlight. For instance, when designing transformers, the saturation flux of a given core is often rated in volt-seconds, which flips around to computing the output voltage per turn as v = d(total_flux)/dt. In the battery bank for a car, volts could be considered the energy per ampere-hour of battery (to within a scale factor of 3600!). Hydrologists, when considering water supply and flooding capacity, would often quote reservoir capacities in acre-feet, so 1 foot of rain on 10 acres of ground ... you get the drift.
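To tie the answers together, here is a small sketch (mine, not from the thread) that tracks SI base-unit exponents with plain Python dicts and checks the identities quoted above; the helper names are invented for illustration.

```python
# Hypothetical sketch (not from the thread): represent units as dicts of
# base-unit exponents (kg, m, s, A) and check the identities in the answers.

def mul(a, b):
    """Multiply two units by adding exponents; drop zero exponents."""
    c = {k: a.get(k, 0) + b.get(k, 0) for k in a.keys() | b.keys()}
    return {k: v for k, v in c.items() if v != 0}

def div(a, b):
    """Divide two units by subtracting exponents."""
    return mul(a, {k: -v for k, v in b.items()})

joule   = {"kg": 1, "m": 2, "s": -2}   # N*m = kg*m^2/s^2
coulomb = {"A": 1, "s": 1}             # A*s
volt    = div(joule, coulomb)          # V = J/C

print(volt)  # {'kg': 1, 'm': 2, 's': -3, 'A': -1} -> kg*m^2/(A*s^3), note the s^3

watt = div(joule, {"s": 1})            # W = J/s
assert watt == mul(volt, {"A": 1})     # P = V*I
ohm = div(volt, {"A": 1})              # R = V/A
assert ohm == {"kg": 1, "m": 2, "s": -3, "A": -2}
```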
189303
https://www.chegg.com/homework-help/questions-and-answers/prove-16-b-c-d-1-1-b-1-c-1-d-positive-numbers-b-c-d-using-cauchy-schwarz-inequality-q136602739
Solved: Prove that 16 ≤ (a + b + c + d)(1/a + 1/b + 1/c + 1/d) | Chegg.com

Question: Prove that

\[
16 \le (a + b + c + d)\left(\frac{1}{a} + \frac{1}{b} + \frac{1}{c} + \frac{1}{d}\right)
\]

for all positive numbers a, b, c, d, using the Cauchy–Schwarz inequality.

There are 2 steps to solve this one. Solution: 100% (2 ratings); the full expert answer is behind Chegg's paywall.
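Since the posted solution is not reproduced above, here is a hedged sketch of one standard Cauchy–Schwarz argument (my sketch, not necessarily Chegg's two steps):

```latex
% One standard Cauchy--Schwarz argument (a sketch; not the paywalled solution).
% Take u = (\sqrt{a}, \sqrt{b}, \sqrt{c}, \sqrt{d}) and
%      v = (1/\sqrt{a}, 1/\sqrt{b}, 1/\sqrt{c}, 1/\sqrt{d}),
% both well defined since a, b, c, d > 0.
% Cauchy--Schwarz: (u . v)^2 <= |u|^2 |v|^2, and each term u_i v_i = 1, so u . v = 4:
\[
  16 = (1+1+1+1)^2
     = \Big(\sum_i u_i v_i\Big)^{2}
     \le \Big(\sum_i u_i^{2}\Big)\Big(\sum_i v_i^{2}\Big)
     = (a+b+c+d)\left(\frac{1}{a}+\frac{1}{b}+\frac{1}{c}+\frac{1}{d}\right).
\]
```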
189304
https://journal.firsttuesday.us/economic-principles-of-appraisal-part-i/70257/
Economic Principles in Appraisal, Part I
Posted by ft Editorial Staff | Jan 15, 2020 | Feature Articles, Real Estate, Video

The first part of this series introduces several economic concepts that are used in the appraisal of real estate. Stay tuned for Part II next week for the completion of this discussion!

Economic theory in appraisals, explained

Appraisals are an integral part of the real estate buying and selling process. For many buyers and sellers, the appraisal is the final hurdle to be cleared before mortgage-assist financing is obtained and the sale formally closed. An appraisal is an individual's opinion or estimate of a property's value on a given date. This estimate is produced in an appraisal report, which includes the data collected and analyzed by the appraiser to support their opinion.

Multiple economic principles are used in the appraisal of real estate. The economic principles of appraisal covered in Part I of this series include the principles of:

- supply and demand;
- change;
- conformity; and
- highest and best use.

The principle of supply and demand: For appraisal purposes, the principle of supply and demand holds that when the supply of available homes decreases, the value of homes increases, since more people are demanding the decreased supply of available homes. This principle correlates to the density of the population and its level of income.

The principle of change: The principle of change holds that property is constantly in a state of change, seen in its life-cycle. The life-cycle of a property has four stages: development, stability, decline and revitalization.

Development of the property includes the subdivision of lots, the construction of improvements and the start of a neighborhood community.

The stability stage of a property, such as a home built within a community, occurs when the property reaches a level of completion where changes are made to it only to maintain an appropriate level of condition.

The decline stage starts when the oldest buildings begin to deteriorate, lower social or economic groups move into the community and larger homes are converted to multiple family use.

The revitalization or gentrification stage occurs when the neighborhood is recognized as suitable for renewal. This most often occurs in more urban areas where high costs force younger and first-time buyers to create value through the renewal process.

The principle of conformity: The principle of conformity holds that when similarity of improvements is maintained in a neighborhood, the maximum value of a property can be realized on a sale.
Zoning regulations and conditions, covenants and restrictions (CC&Rs) tend to protect homeowners by narrowing the uses and excluding nonconforming uses of the property. The principle of conformity is further categorized under the principles of:

regression: The principle of regression holds that the value of the best property in a neighborhood will be adversely affected by the value of other properties in the neighborhood. For example, this principle applies to over-improved homes. When an owner makes extensive renovations, such as adding additional rooms and landscaping, and the other neighbors do not, the house is no longer as similar to the others. On the sale of the over-improved home, the owner will not receive the full value of the cost of the over-improvements.

progression: The principle of progression is the opposite of the principle of regression, holding that a smaller and lesser-maintained property in a well-kept neighborhood will sell for more than if the home were in an area of comparable properties.

The principle of highest and best use: The principle of highest and best use holds that the greatest value of a property is realized when its use is maximized. The test for highest and best use requires that the use be physically possible, legally permissible, economically feasible, and maximally productive (memorized by the acronym PLEM).

The economic principles of appraisal to be illustrated in Part II include the principles of:

- consistent-use;
- balance;
- contribution;
- substitution;
- anticipation; and
- competition.

Related article: //journal.firsttuesday.us/70201-2/70201/
189305
https://www.energy.gov/sites/prod/files/2019/01/f58/Ultimate%20Fast%20Facts%20Guide-PRINT.pdf
THE ULTIMATE FAST FACTS GUIDE TO NUCLEAR ENERGY

Nuclear energy has been quietly powering America with clean, carbon-free electricity for the last 60 years. It may not be the first thing you think of when you heat or cool your home, but maybe that's the point. It's been so reliable that we sometimes take it for granted. Did you know about a fifth of the country's electricity comes from nuclear power each year? If not, then it's about time you get to know nuclear. Here are five fast facts to get you up to speed:

1. NUCLEAR POWER PLANTS PRODUCED 805 BILLION KILOWATT HOURS OF ELECTRICITY IN 2017
The United States is the world's largest producer of nuclear power. It generated just under 805 billion kilowatt hours of electricity in 2017—enough to power 73 million homes. Commercial nuclear power plants have supplied around 20% of the nation's electricity each year since 1990.

2. NUCLEAR POWER PROVIDES 56% OF AMERICA'S CLEAN ENERGY
Nuclear energy provided 56% of America's carbon-free electricity in 2017, making it by far the largest domestic source of clean energy. Nuclear power plants do not emit greenhouse gases while generating electricity. They produce power by boiling water to create steam that spins a turbine. The water is heated by a process called fission, which makes heat by splitting apart uranium atoms inside a nuclear reactor core.

3. NUCLEAR ENERGY IS THE MOST RELIABLE ENERGY SOURCE IN AMERICA
Nuclear power plants operated at full capacity more than 92% of the time in 2017—making it the most reliable energy source in America. That's nearly twice as reliable as coal (54%) and natural gas (55%) plants, and 2 to 3 times more reliable than wind (37%) and solar (27%) plants. Nuclear power plants are designed to run 24 hours a day, 7 days a week because they require less maintenance and can operate for longer stretches before refueling (typically every 1.5 or 2 years).

4. NUCLEAR HELPS POWER 30 U.S. STATES
As of September 2018, 98 commercial reactors help power homes and businesses in 30 U.S. states. Illinois has 11 reactors—the most of any state—and joins South Carolina and New Hampshire in receiving more than 50% of its power from nuclear.

5. NUCLEAR FUEL IS EXTREMELY DENSE
Because of this, the amount of used nuclear fuel is not as big as you think. All of the used nuclear fuel produced by the U.S. nuclear energy industry over the last 60 years could fit on a football field at a depth of less than 10 yards.

Source: U.S. Energy Information Administration

HOW DOES A NUCLEAR REACTOR WORK?

NUCLEAR REACTORS ARE THE HEART OF A NUCLEAR POWER PLANT. They contain and control nuclear chain reactions that produce heat through a physical process called fission. That heat is used to make steam that spins a turbine to create electricity. With more than 450 commercial reactors worldwide, including 98 in the United States, nuclear power continues to be one of the largest sources of reliable carbon-free electricity available.

NUCLEAR FISSION CREATES HEAT
The main job of a reactor is to house and control nuclear fission—a process where atoms split and release energy. Reactors use uranium for nuclear fuel. The uranium is processed into small ceramic pellets and stacked together into sealed metal tubes called fuel rods. Typically, more than 200 of these rods are bundled together to form a fuel assembly.
A reactor core is usually made up of a couple hundred assemblies, depending on the power level. Inside the reactor vessel, the fuel rods are immersed in water, which acts as both a coolant and moderator. The moderator helps slow down the neutrons produced by fission to sustain the chain reaction. Control rods can then be inserted into the reactor core to reduce the reaction rate or withdrawn to increase it. The heat created by fission turns the water into steam, which spins a turbine to produce carbon-free electricity.

TYPES OF LIGHT-WATER REACTORS IN THE UNITED STATES
All commercial nuclear reactors in the United States are light-water reactors. This means they use normal water as both a coolant and neutron moderator. There are two types of light-water reactors operating in America:

PRESSURIZED-WATER REACTORS
More than 65% of the commercial reactors in the United States are pressurized-water reactors, or PWRs. These reactors pump water into the reactor core under high pressure to prevent the water from boiling. The water in the core is heated by nuclear fission and then pumped into tubes inside a heat exchanger. Those tubes heat a separate water source to create steam. The steam then turns an electric generator to produce electricity. The core water cycles back to the reactor to be reheated, and the process is repeated.

BOILING-WATER REACTORS
Roughly a third of the reactors operating in the United States are boiling-water reactors (BWRs). BWRs heat water and produce steam directly inside the reactor vessel. Water is pumped up through the reactor core and heated by fission. Pipes then feed the steam directly to a turbine to produce electricity. The unused steam is then condensed back to water and reused in the heating process.

HOW MUCH POWER DOES A NUCLEAR REACTOR PRODUCE?
A typical reactor produces around 1 gigawatt of power, or the same amount of power as 431 utility-scale wind turbines, 3.125 million PV panels, or 100 million LED bulbs.

NUCLEAR POWER IS THE MOST RELIABLE ENERGY SOURCE AND IT'S NOT EVEN CLOSE
To better understand what makes nuclear so reliable, look at its capacity factor: nuclear energy has, by far, the highest capacity factor of any energy source. This basically means nuclear power plants are producing maximum power more than 92% of the time during the year. That's almost twice as reliable as coal or natural gas units, and 2 to 3 times more reliable than wind and solar plants.

WHY ARE NUCLEAR POWER PLANTS MORE RELIABLE?
Nuclear power plants are typically used more often because they require less maintenance and are designed to operate for longer stretches before refueling (typically every 1.5 or 2 years). Natural gas and coal capacity factors are generally lower due to routine maintenance and/or refueling at these facilities. Renewable plants are considered intermittent or variable sources and are mostly limited by a lack of fuel (i.e., wind, sun, or water). As a result, these plants need a backup power source such as large-scale storage (not currently available at grid scale), or they can be paired with a reliable baseload power source like nuclear energy.

WHY DOES THIS MATTER?
A typical nuclear reactor produces 1 gigawatt (GW) of electricity. That doesn't mean you can simply replace it with a 1 GW coal or renewable plant. Based on the capacity factors above, you would need almost two coal plants or nearly three renewable plants (each of 1 GW size) to generate the same amount of electricity onto the grid.

Source: U.S. Energy Information Administration
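As a back-of-envelope check of the "almost two coal or nearly three renewable plants" claim, here is a small sketch (mine, not from the guide) using the 2017 capacity factors quoted above:

```python
# A quick sanity check (not from the guide): how many 1 GW plants of each type
# match one 1 GW nuclear plant's annual output, given 2017 capacity factors.
HOURS_PER_YEAR = 24 * 365

capacity_factor = {"nuclear": 0.92, "coal": 0.54, "natural gas": 0.55,
                   "wind": 0.37, "solar": 0.27}

# Annual energy from a 1 GW nuclear plant, in gigawatt-hours.
nuclear_gwh = 1.0 * capacity_factor["nuclear"] * HOURS_PER_YEAR  # ~8,059 GWh

for source, cf in capacity_factor.items():
    plants_needed = nuclear_gwh / (1.0 * cf * HOURS_PER_YEAR)
    print(f"{source:12s} CF={cf:.0%} -> {plants_needed:.2f} one-GW plants")
# coal ~1.70, natural gas ~1.67, wind ~2.49, solar ~3.41 -- i.e., almost two
# coal plants, or roughly two and a half to three and a half renewable plants.
```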
NUCLEAR IS CLEAN AND SUSTAINABLE

When you hear the words "clean energy," what comes to mind? Most people immediately think of solar panels or wind turbines, but how many of you thought of nuclear energy? Nuclear is often left out of the "clean energy" conversation despite being the second largest source of low-carbon electricity in the world behind hydropower. So, just how clean and sustainable is nuclear? Try these quick facts for starters.

NUCLEAR IS A ZERO-EMISSIONS CLEAN ENERGY SOURCE
It generates power through fission, which is the process of splitting uranium atoms to produce energy. The heat released by fission is used to create steam that spins a turbine to generate electricity without the harmful byproducts emitted by fossil fuels. According to the Nuclear Energy Institute (NEI), the United States avoided more than 14,000 million metric tons of carbon dioxide emissions between 1995 and 2016. That's the equivalent of removing 3 billion cars from the road. It also keeps the air clean by removing thousands of tons of harmful air pollutants each year that contribute to acid rain, smog, lung cancer and cardiovascular disease.

NUCLEAR ENERGY'S LAND FOOTPRINT IS SMALL
Despite producing massive amounts of carbon-free power, nuclear energy produces more electricity on less land than any other clean-air source. A typical 1,000-megawatt nuclear facility in the United States needs a little more than 1 square mile to operate. NEI says wind farms require 360 times more land area to produce the same amount of electricity, and solar photovoltaic plants require 75 times more space. To put that in perspective, you would need more than 3 million solar panels to produce the same amount of power as a typical commercial reactor, or more than 430 wind turbines (capacity factor not included).

NUCLEAR ENERGY PRODUCES MINIMAL WASTE
Nuclear fuel is extremely dense; its energy density is about 1 million times greater than that of other traditional energy sources. Because of this, the amount of used nuclear fuel is not as big as you might think. All of the used nuclear fuel produced by the U.S. nuclear energy industry over the last 60 years could fit on a football field at a depth of less than 10 yards! That waste can also be reprocessed and recycled, although the United States does not currently do this. However, some advanced reactor designs being developed could operate on used fuel.

3 ADVANCED REACTOR SYSTEMS TO WATCH OUT FOR BY 2030
Move over millennials, there's a new generation looking to debut by 2030. Generation IV nuclear reactors are being developed through an international cooperation of 14 countries, including the United States. The U.S. Department of Energy and its national labs are supporting research and development on a wide range of new advanced reactor technologies that could be a game-changer for the nuclear industry. These innovative systems are expected to be cleaner, safer and more efficient than previous generations. Intrigued? Here are three designs we are currently working on with industry partners to help meet our future energy needs in a cost-competitive way.

SODIUM-COOLED FAST REACTOR
The sodium-cooled fast reactor (SFR) uses liquid metal (sodium) as a coolant instead of the water that is typically used in U.S. commercial power plants. This allows the coolant to operate at higher temperatures and lower pressures than current reactors, improving the efficiency and safety of the system.
The SFR also uses a fast neutron spectrum, meaning that neutrons can cause fission without having to be slowed down first as they are in current reactors. This could allow SFRs to use both fissile material and spent fuel from current reactors to produce electricity.

VERY HIGH TEMPERATURE REACTOR
The very high temperature reactor is cooled by flowing gas and is designed to operate at high temperatures that can produce electricity extremely efficiently. The high-temperature gas could also be used in energy-intensive processes that currently rely on fossil fuels, such as hydrogen production, desalination, district heating, petroleum refining, and ammonia production. Very high temperature reactors offer impressive safety features and can be easy to construct and affordable to maintain.

MOLTEN SALT REACTOR
Molten salt reactors (MSRs) use molten fluoride or chloride salts as a coolant. The coolant can flow over solid fuel as in other reactors, or fissile materials can be dissolved directly into the primary coolant so that fission directly heats the salt. MSRs are designed to use less fuel and produce shorter-lived radioactive waste than other reactor types. They have the potential to significantly change the safety posture and economics of nuclear energy production by processing fuel online, removing waste products and adding fresh fuel without lengthy refueling outages. Their operation can be tailored for the efficient burn-up of plutonium and minor actinides, which could allow MSRs to consume waste from other reactors. The system can also be used for electricity or hydrogen production.

COMMON MISUNDERSTANDINGS
"The Simpsons." It's a show we all know and grew to love—unless you actually work with nuclear technology. America's longest-running animated series on FOX has been making nuclear workers cringe on their couches for almost 3 decades now. And while this show has produced a number of catch phrases that are immortalized in today's pop culture, its comedic depiction of the fictitious Springfield nuclear power plant—and its negligent safety operator Homer Simpson—is far from "excellent." Here are four things "The Simpsons" didn't get quite so right about nuclear energy.

1. CONTROL ROOM OPERATORS DO NOT WORK BY THEMSELVES
In several episodes, Homer Simpson is by himself in a control room working on a remote safety console to help manage the reactor. According to the Nuclear Regulatory Commission (NRC), a supervisor, along with a second supervisor or reactor operator, must be present at all times during reactor operation. All individuals, either operating or supervising the operation of a U.S. commercial reactor, must also be licensed by the NRC.

2. COMMERCIAL NUCLEAR SPENT FUEL IS NOT A LIQUID
The show routinely depicts radioactive waste as a green, oozy liquid that is seeping out of huge drum containers and pipes throughout the facility. In current reactors, nuclear fuel is made up of metal fuel rods that contain small ceramic pellets of enriched uranium oxide. The fuel rods are combined into tall assemblies that are then placed into the reactor. After use, the fuel rods are first moved into steel-lined temporary storage pools that are about 40 feet deep. After at least 3 years of wet storage, they are then sealed inside welded steel-reinforced concrete containers.

3. NUCLEAR WASTE IS SAFELY STORED
Radioactive waste is commonly seen around the town of Springfield carelessly dumped into seas, stuffed into trees and put on playgrounds. The process is a little different in real life.
Spent fuel is safely and securely stored at more than 100 reactor and storage sites across the country. The fuel is either enclosed in storage pools or dry casks, as mentioned above. On-site storage at nuclear power plants is not intended to be permanent. The U.S. Department of Energy is requesting funds to restart its application process for a permanent repository site and to initiate a robust interim storage program.

4. NUCLEAR POWER PLANTS DO NOT CAUSE MUTATIONS
Who can forget Blinky, the three-eyed fish, or that scary mutated spider? You won't see these characters in real life because nuclear power plants do not release any pollution into the environment—just water vapor. In fact, your granite countertops give off more radiation than living next door to a nuclear power plant over the course of a year.

COMING SOON: ADVANCED SMALL MODULAR REACTORS
Welcome to the future of nuclear energy. Within the next decade, advanced small modular reactors (SMRs) could change the way we think about reliable, clean and affordable nuclear power. Instead of going big, scientists and engineers went small, developing mini reactors that are roughly a third of the size of a typical nuclear power plant. That means America's largest clean energy source could be coming to a market near you, making nuclear more scalable and flexible than ever before.

NUCLEAR MICROREACTORS
Nuclear is getting even smaller... and it's opening up some big opportunities for the industry. A handful of microreactor designs are under development in the United States, and they could be ready to roll out within the next decade. These plug-and-play reactors will be small enough to transport by truck and could help solve energy challenges in a number of areas, ranging from remote commercial or residential locations to military bases.

FEATURES
Microreactors are not defined by their fuel form or coolant. Instead, they have three main features:
1. Factory fabricated: All components of a microreactor would be fully assembled in a factory and shipped out to location. This eliminates difficulties associated with large-scale construction, reduces capital costs and would help get the reactor up and running quickly.
2. Transportable: Smaller unit designs will make microreactors very transportable. This would make it easy for vendors to ship the entire reactor by truck, shipping vessel, airplane or railcar.
3. Self-regulating: Simple and responsive design concepts will allow microreactors to self-regulate. They won't require a large number of specialized operators and would utilize passive safety systems that prevent any potential for overheating or reactor meltdown.

BENEFITS
Microreactor designs vary, but most would be able to produce 1-20 megawatts of thermal energy that could be used directly as heat or converted to electric power. They can be used to generate clean and reliable electricity for commercial use or for non-electric applications such as district heating, water desalination and hydrogen fuel production. Other benefits include:
• Seamless integration with renewables within microgrids
• Can be used for emergency response to help restore power to areas hit by natural disasters
• A longer core life, operating for up to 10 years without refueling
• Can be quickly removed from sites and exchanged for new ones
Most designs will require fuel with a higher concentration of uranium-235 than is currently used in today's reactors, although some may benefit from high-temperature moderating materials that would reduce fuel enrichment requirements while maintaining the small system size. The U.S. Department of Energy supports a variety of advanced reactor designs, including gas, liquid metal, molten salt and heat pipe-cooled concepts. American microreactor developers are currently focused on gas and heat pipe-cooled designs that could debut as early as the mid-2020s.

ACCIDENT TOLERANT FUELS
The U.S. Department of Energy is working with industry to quickly develop new fuels with enhanced accident tolerance. These nuclear fuels will not only increase the safety of today's light-water reactors but also improve plant performance at a crucial time for the U.S. nuclear industry. A number of reactors are currently under economic stress, and the benefits that these fuels bring to the table could make the case for utilities to extend plant operations. Here are 5 things you need to know about accident tolerant fuels.

1. ACCIDENT TOLERANT FUELS BEAT THE HEAT AND PERFORM BETTER
Nuclear fuel is exposed to harsh conditions inside the reactor core. Accident tolerant fuels use new materials that reduce hydrogen buildup, improve fission product retention and are structurally more resistant to radiation, corrosion, and high temperatures. In short, these fuel concepts will perform better and withstand extreme heat and steam for longer durations than the current fuel system of uranium dioxide fuel and zircaloy cladding.

2. ACCIDENT TOLERANT FUELS LAST LONGER
Accident tolerant fuels will be able to last longer and operate more efficiently in a reactor core. This could potentially extend the time between refueling from 1.5 years to 2 years or more and use roughly 30% less fuel. That would mean less waste and reduced fuel costs over the life of the reactor.

3. ACCIDENT TOLERANT FUELS IMPROVE PLANT PERFORMANCE
In addition to lasting longer, accident tolerant fuels are also designed to have higher burnup. This means plants would run for longer periods of time, possibly at higher power, with less downtime, leading to higher profit margins for the plants.

4. ACCIDENT TOLERANT FUELS ARE INDUSTRY-LED
Framatome, General Electric (GE) and Westinghouse are leading the charge to aggressively develop new reactor fuels in an accelerated timeframe. DOE and the national labs are supporting these efforts with irradiation and safety testing, along with advanced modeling and simulation, to help qualify their fuels with the U.S. Nuclear Regulatory Commission.

5. ACCIDENT TOLERANT FUELS COULD DEBUT BY 2025
Framatome, GE's Global Nuclear Fuel and Westinghouse are currently testing their accident tolerant fuels. With support from the government and national labs, the three companies hope to commercialize their fuels and deploy them to commercial reactors by 2025.

New advanced Framatome fuel pellets lined up ready to be loaded inside cladding. (Photo: Framatome)

NUCLEAR: ENERGY AND BEYOND
Did you know nuclear does more than just produce massive amounts of clean energy? It's used in a variety of applications, ranging from cancer treatments to fighting crime, thanks to a little thing we call radioisotopes. These are simply atoms that emit radiation, and since their discovery more than a century ago, they have transformed the medical industry and other fields to help benefit society. Here are 5 ways nuclear powers our lives.
1. SPACE EXPLORATION
A great deal of what we know about deep space has been made possible by radioisotope power systems (RPSs). These small nuclear power sources are used to power spaceships in the extreme environments of deep space. RPSs have proven to be safe, reliable, and maintenance-free over decades of space exploration, including missions to study Jupiter, Saturn, Mars, and Pluto.

2. NUCLEAR ENERGY
Nuclear provides nearly 20% of our electricity in the United States. It's also the nation's largest source of clean energy—making up more than half of our emissions-free electricity. That's more than all of the renewables combined. The nation's fleet of reactors also operates more than 92% of the time, making it the most reliable energy source on the grid by far.

3. MEDICAL DIAGNOSIS AND TREATMENT
Approximately one-third of all patients admitted to U.S. hospitals are diagnosed or treated using radiation or radioactive materials. Nuclear medical imaging, which combines the safe administration of radioisotopes with camera imaging, helps physicians locate tumors, size anomalies, or other problems. Doctors also use radioisotopes therapeutically to kill cancerous tissue, reduce the size of tumors, and alleviate pain.

4. CRIMINAL INVESTIGATION
Criminal investigators frequently rely on radioisotopes to obtain physical evidence linking a suspect to a specific crime. They can be used to identify trace chemicals in materials such as paint, glass, tape, gunpowder, lead, and poisons.

5. AGRICULTURE
Finally, farmers can use radioisotopes to control insects that destroy crops, as an alternative to chemical pesticides. In this procedure, male insect pests are rendered infertile. Pest populations are then drastically reduced and, in some cases, eliminated. Nuclear energy is also harnessed to preserve our food. When food is irradiated, harmful organisms are destroyed without cooking or altering the nutritional properties of the food. It also makes chemical additives and refrigeration unnecessary, and requires less energy than other food preservation methods.

WHY I DECIDED TO BECOME A NUCLEAR ENGINEER
Anna Biela is on the verge of graduating from Purdue University with a degree in nuclear engineering. She has plans to go to grad school to further her research on reactor core physics. Her goal is to develop advanced reactors—an area and a career path she is very passionate about. "I see a lot of potential in nuclear energy," said Biela. "It's good for the environment, and it can help stabilize the grid with affordable and reliable energy. I saw there was a space for me to contribute and I felt that what I could contribute would be of significance."

For more information, please visit energy.gov/ne
DOE/NE-0150

Disclaimer: All product and company names used in this publication are the trademarks of their respective holders. Reference herein to any specific commercial company, product, process, or service by trade name, trademark, manufacturer, or otherwise, does not necessarily constitute or imply its endorsement, recommendation, or favoring by the United States Government or any agency thereof or its contractors or subcontractors. All images not otherwise credited are U.S. Government images.
189306
https://www.omnicalculator.com/physics/diopter
Diopter Calculator

Have you ever wondered what the optical power of the human eye is, or wanted to convert it to focal length? Our diopter calculator is at your service. Whether you need it for research purposes or are just curious by nature, our diopter calculator is the best option. You can also use it as a focal length to diopter converter. But don't think it's all about conversion, because our article is full of information revolving around optical power, diopter, and focal length. To give you an idea of what we have in store for you, here are some of the topics we will be discussing: diopter definition; how to find the power of a lens; and diopters to focal length.

What is a diopter? - Diopter definition

To understand what a diopter is, first we need to know about the power of a lens. The power of a lens is its ability to converge a beam of light falling on it. For example, a powerful lens will focus parallel light rays closer to itself, into a smaller and more intense spot. The diopter (D) is the unit of lens power and is defined as one divided by the focal length of the lens measured in meters. You might have seen the term diopter on eyeglasses and contact lens prescriptions issued by optometrists.

⚠️ Don't confuse this power with the concept of power you read about in work and power.

How to find the power of a lens

The power of a lens P is expressed as the reciprocal of its focal length f:

P = 1 / f

To find the power in diopters (D), we need to convert the focal length into meters:

P [D] = 1 / f [m]

You may use this as the diopter formula, as the optical power is in diopters. For example, if the focal length of a lens is 20 cm, we can calculate its power in diopters as:

P = 1 / (20 × 10⁻² m) = 5 D

How to use the diopter calculator

Given the physics involved, one might think that using a diopter calculator would be complicated, but it couldn't get any simpler. The only thing you need to determine optical power is the focal length in meters, and we offer a few unit options for that. The steps to use the focal length to diopters converter are: input the focal length, preferably in meters, but other unit options are available (select the desired unit first, then input the value); the result is the optical power in diopters. You can use the tool the other way around as well: input the diopter measurement to get the focal length. For example, if the focal length of a lens is 70 cm, choose centimeters as the unit and enter 70; the result is an optical power of 1.4286 diopters.

Converting diopter to focal length

Up until now, we have seen how to determine the power of a lens in diopters using the focal length. But you should know that this calculation is a two-way street: you can use the diopter formula to determine the focal length if you already know the power of the lens. Simply rearrange the diopter formula, and you get:

f [m] = 1 / P [D]

Taking the reciprocal of the optical power gives the focal length. For instance, if the optical power is 45 diopters, we can calculate the focal length as:

f = 1 / (45 D) ≈ 0.0222 m

Diopter with respect to magnification and focal length

Now that we understand how to find the power of a lens, let's look at the relationship between diopters, magnification, and focal length.
Relationship between diopters and magnification: Magnification and diopters are two distinct aspects of the same optical system, with magnification representing the relative increase in the size of an object and diopters indicating the visual or refractive power of a lens, e.g., in corrective eyewear. The higher the diopter value, the greater the optical power, translating into greater magnification.

Relationship between diopters and focal length: In optics, the focal length is the distance between the lens and the point where parallel light rays converge or diverge. Diopters are the reciprocal of the focal length in meters. For example, a lens with a focal length of 0.25 meters (25 centimeters) has a refractive power of 4 diopters.

You may want to try our magnification of a lens calculator. It is sure to come in handy.

FAQs

How do I determine the diopter of a lens from focal length?
To determine the diopter of a lens from its focal length, follow these instructions: find out the focal length f of the lens in meters, then take the reciprocal of the focal length, 1 / f. You will get the power of the lens in diopters. Or, if you want to make it hassle-free, use Omni Calculator's diopter calculator.

How many diopters is the human eye?
About 60 D. The focal length of the human eye is about 1.70 cm or 0.017 m. Hence, the optical power of the human eye is: P = 1 / 0.017 m ≈ 60 D

What magnification is 4 diopters?
With the rule of thumb that one diopter equals 0.25 magnification, 4 diopters roughly translate to 1× or 100% magnification. In case you are calculating these for medical purposes, professional guidance is recommended for accurate assessments.

What magnification is the equivalent of 5 diopters?
5 diopters could yield an approximate magnification of 1.25× or 125%. Keep in mind these are rough estimates, and visual experiences can differ. If you're using these calculations for medical purposes, consulting a healthcare professional for accurate assessment is crucial.
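The two conversions above are one-liners in code as well. Here is a minimal sketch in Python (the function names are our own, not part of the calculator):

```python
def focal_length_to_diopters(f_m: float) -> float:
    """Optical power P in diopters from focal length f in meters: P = 1 / f."""
    return 1.0 / f_m

def diopters_to_focal_length(p_d: float) -> float:
    """Focal length f in meters from optical power P in diopters: f = 1 / P."""
    return 1.0 / p_d

print(focal_length_to_diopters(0.20))   # 5.0 D   (the 20 cm example)
print(focal_length_to_diopters(0.70))   # ~1.4286 D (the 70 cm example)
print(diopters_to_focal_length(45))     # ~0.0222 m (the 45 D example)
print(focal_length_to_diopters(0.017))  # ~58.8 D, i.e. about 60 D for the eye
```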
189307
https://math.stackexchange.com/questions/1902603/discrete-uniform-probability-on-a-sample-space-of-prime-cardinality
Discrete uniform probability on a sample space of prime cardinality - Mathematics Stack Exchange

Discrete uniform probability on a sample space of prime cardinality

Asked 9 years, 1 month ago · Modified 9 years, 1 month ago · Viewed 913 times

My question is to show that if I have a fair die with $p$ faces, where $p$ is prime, and the experiment consists of rolling it once, no two proper events can be independent. Here is my approach: suppose $A$ and $B$ are independent events. Then

$P(A \cap B) = P(A) \cdot P(B) = \frac{|A|}{p} \cdot \frac{|B|}{p} = \frac{|A||B|}{p^2} < 1.$

Not sure where a contradiction would happen to know how to proceed. Any help?

probability · probability-theory

asked Aug 24, 2016 at 21:39 by user363626

1 Answer

Neat question! The outcomes of the die are $\{1, 2, \dots, p\}$. Suppose we have a pair of proper events $A, B$ and we assume they are independent. I'm assuming here that "proper" means neither event is empty and neither event is equal to the whole space. If $A \cap B = \emptyset$ we immediately get a contradiction, since then $P(A \cap B) = 0$ while $P(A)P(B) > 0$; so suppose that $A \cap B = C$ for some proper event $C$. We have:

$\frac{|C|}{p} = P(A \cap B) = P(A)P(B) = \frac{|A||B|}{p^2}.$

Rearranging this gives:

$p\,|C| = |A||B|.$
Since neither $A$ nor $B$ is the full space, $0 < |A| < p$ and $0 < |B| < p$. Since $|C| > 0$ and $p$ is prime, $p$ must divide either $|A|$ or $|B|$. This is clearly a contradiction.

Note that if $p$ were not prime (say, $p = 4$), then it would suffice for a factor of $p$ to divide $|A|$ or $|B|$, so $p$ being prime is necessary.

answered Aug 24, 2016 at 21:49 by Alex R.
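As a side note not found in the thread, the claim is easy to verify by brute force for small dice. A minimal Python sketch (the helper name is our own):

```python
from itertools import combinations

def has_independent_pair(n):
    """On a fair n-sided die, is there a pair of proper events A, B with
    P(A ∩ B) = P(A) P(B), i.e. n * |A ∩ B| == |A| * |B|?"""
    outcomes = range(1, n + 1)
    events = [set(c) for r in range(1, n)          # non-empty proper subsets
              for c in combinations(outcomes, r)]
    return any(n * len(A & B) == len(A) * len(B)
               for A, B in combinations(events, 2))

for n in range(2, 9):
    print(n, has_independent_pair(n))
# Primes 2, 3, 5, 7 print False; composites 4, 6, 8 print True
# (e.g. A = {1, 2}, B = {1, 3} on a 4-sided die: 4 * 1 == 2 * 2).
```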
189308
https://www.quora.com/What-is-the-short-method-to-find-the-middle-term-in-a-binomial-expansion
What is the short method to find the middle term in a binomial expansion? - Quora

What is the short method to find the middle term in a binomial expansion?

Gopal Menon
B Sc (Hons) in Mathematics, Indira Gandhi National Open University (IGNOU) (Graduated 2010) · 6y

The binomial expansion is

$(a+b)^n = \sum_{r=0}^{n} \binom{n}{r} a^{n-r} b^r.$

If $n$ is even, the total number of terms in the binomial expansion is odd and the middle term is the $\left(\frac{n+2}{2}\right)$th term. If $n$ is odd, the total number of terms in the binomial expansion is even and there are two middle terms, the $\left(\frac{n+1}{2}\right)$th and $\left(\frac{n+3}{2}\right)$th terms.

For the $k$th term, $r = k - 1$, so for the $\left(\frac{n+i}{2}\right)$th term, $r = \frac{n+i}{2} - 1 = \frac{n-2+i}{2}$. Then $n - r = n - \frac{n-2+i}{2} = \frac{n+2-i}{2}$.

If $n$ is even, $i = 2$ and the middle term is $\binom{n}{n/2}\, a^{n/2} b^{n/2}$.

If $n$ is odd, $i = 1$ or $3$ and the middle terms are $\binom{n}{\frac{n-1}{2}}\, a^{\frac{n+1}{2}} b^{\frac{n-1}{2}}$ and $\binom{n}{\frac{n+1}{2}}\, a^{\frac{n-1}{2}} b^{\frac{n+1}{2}}$.
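As a quick cross-check of this recipe, here is a small Python sketch (the `middle_terms` helper is our own, not from the answer):

```python
from math import comb

def middle_terms(n):
    """Middle term(s) of (a+b)**n as tuples (coefficient, power of a, power of b)."""
    if n % 2 == 0:
        r = n // 2  # single middle term: the (n/2 + 1)-th
        return [(comb(n, r), n - r, r)]
    r1, r2 = (n - 1) // 2, (n + 1) // 2  # two middle terms for odd n
    return [(comb(n, r1), n - r1, r1), (comb(n, r2), n - r2, r2)]

print(middle_terms(6))  # [(20, 3, 3)]             -> 20 a^3 b^3
print(middle_terms(5))  # [(10, 3, 2), (10, 2, 3)] -> 10 a^3 b^2 and 10 a^2 b^3
```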
Assistant Bot · 1y

To find the middle term in a binomial expansion of the form $(a+b)^n$, you can use the following method:

1. Identify $n$: determine the exponent in $(a+b)^n$.
2. Determine the number of terms: the expansion contains $n+1$ terms.
3. Find the middle term: if $n$ is even, the middle term is the $\left(\frac{n}{2}+1\right)$th term; if $n$ is odd, the middle terms are the $\left(\frac{n+1}{2}\right)$th and $\left(\frac{n+1}{2}+1\right)$th terms.
4. Use the binomial coefficient: the $k$th term in the expansion is given by $T_k = \binom{n}{k-1} a^{n-(k-1)} b^{k-1}$, where $k$ is the term number (starting from 1).

Example: for $(x+y)^6$ (where $n = 6$, which is even), the middle term is the $\frac{6}{2}+1 = 4$th term: $T_4 = \binom{6}{3} x^{3} y^{3} = 20 x^3 y^3$. For $(x+y)^5$ (where $n = 5$, which is odd), the middle terms are the 3rd and 4th terms: $T_3 = \binom{5}{2} x^3 y^2 = 10 x^3 y^2$ and $T_4 = \binom{5}{3} x^2 y^3 = 10 x^2 y^3$. This method allows you to quickly find the middle term(s) in a binomial expansion.

Klaus Hauptfleisch
B.A. from University of Bonn (Graduated 1985) · Jun 17
Originally Answered: What is the short trick to find middle term in binomial expression?
The binomial formulae $(a \pm b)^n$ expand to

$\binom{n}{k}\, a^{n-k} (\pm b)^k = \frac{n!}{k!\,(n-k)!}\, a^{n-k} (\pm b)^k = \frac{n(n-1)(n-2)(n-3)\cdots}{k!}\, a^{n-k} (\pm b)^k$

for each of the $n+1$ terms, where $k$ is the term number minus 1 (because in the first term $k = 0$), and $!$ means factorial, i.e. $7! = 7 \times 6 \times 5 \times 4 \times 3 \times 2 \times 1 = 5{,}040$. If the exponent is even there is only one middle term, otherwise two.

$(a-b)^{16} = a^{16} - 16a^{15}b + 120a^{14}b^2 - 560a^{13}b^3 + 1{,}820a^{12}b^4 - 4{,}368a^{11}b^5 + 8{,}008a^{10}b^6 - 11{,}440a^9b^7 + 12{,}870a^8b^8 - 11{,}440a^7b^9 + 8{,}008a^6b^{10} - 4{,}368a^5b^{11} + 1{,}820a^4b^{12} - 560a^3b^{13} + 120a^2b^{14} - 16ab^{15} + b^{16}$

So the middle term here is the one with $12{,}870\,a^8b^8$.

Klaus Hauptfleisch
B.A. from University of Bonn (Graduated 1985) · 11mo

The general formula for binomial expansions of $(a-b)^n$ is

$\binom{n}{k}\, a^{n-k} (-b)^k, \quad \text{where}\ \binom{n}{k} = \frac{n!}{k!\,(n-k)!} = \frac{n(n-1)(n-2)(n-3)\cdots}{k!}.$

Here $n$ is the exponent and $k$ is the term number minus 1 each time; $!$ means factorial (e.g. $6! = 6 \cdot 5 \cdot 4 \cdot 3 \cdot 2 \cdot 1 = 720$). In any case there are $n+1$ terms for the exponent $n$, which means that only binomial expansions with even $n$ have a single middle term. Here is an example:

$0.98^{10} = (1 - 0.02)^{10}$

The middle term is

$\binom{10}{5} \cdot 1^5 \cdot (-0.02)^5 = 252 \cdot (-0.0000000032) = -0.0000008064.$
Christopher Pellerito
Neither pays for, nor charges for, Quora content · 6y
Originally Answered: What is the short trick to find middle term in binomial expression?

Let's say you want the middle term in $(a+b)^{50}$. That term is $\binom{50}{25} a^{25} b^{25}$, and $\binom{50}{25} = \frac{50!}{25!\,25!} = 126{,}410{,}606{,}437{,}752$, so it is $126{,}410{,}606{,}437{,}752\, a^{25} b^{25}$. Or in general, for an even value of $n$, the middle term is $\binom{n}{n/2}\, a^{n/2} b^{n/2}$.

Daniel Claydon
Learning mathematics · 6y
Related: How does one find the largest (absolute) coefficient of a binomial, is the middle term always the largest?

Yeah, the middle term is always the biggest. A simple proof is as follows. The binomial coefficient $\binom{n}{k}$ is given by the formula

$\binom{n}{k} = \frac{n!}{k!\,(n-k)!}.$

A bit of algebra shows

$\binom{n}{k} = \frac{n-k+1}{k} \binom{n}{k-1}.$

Hence $\binom{n}{k}$ is greater than $\binom{n}{k-1}$ if, and only if, $\frac{n-k+1}{k} > 1$. This happens exactly when $n + 1 > 2k$. If $n$ is even, then the largest value of $k$ for which this holds is $\frac{n}{2}$, so the binomial coefficients increase in magnitude up until $\binom{n}{n/2}$, at which point they start decreasing again. If $n$ is odd, we get essentially the same thing, except this time $\binom{2N+1}{N}$ and $\binom{2N+1}{N+1}$ are tied for top spot.

Ragu Rajagopalan
Passionate Maths solver; reviving knowledge after 3 decades · Updated 7y
Related: How do I find the middle term in the expansion of $(x - 1/x)^n (1-x)^n$ in powers of $x$?

The given expression is $(x - \frac{1}{x})^n (1-x)^n = y = A \cdot B$ (say). The highest power of $x$ is $x^{2n}$ and the lowest power of $x$ is $(1/x)^n = x^{-n}$, so the total number of terms when expanded is $2n - (-n) + 1 = 3n + 1$. The total number of terms is even when $n$ is odd, so there is no single middle term available (EDIT: corrected); the total number of terms is odd when $n$ is even. The middle term is $k = \frac{3n+1}{2}$, if $n$ is odd. … (1)

We need to find the coefficient of $x^k$ from the given expression for $y$. Let us find out the general term of $y$.
The general term of $A = (x - \frac{1}{x})^n$ is $\binom{n}{i}\, x^i \left(-\frac{1}{x}\right)^{n-i} = (-1)^{n-i} \binom{n}{i}\, x^{i-n+i} = (-1)^{n-i} \binom{n}{i}\, x^{2i-n}$.

The general term of $B = (1-x)^n$ is $\binom{n}{j} (-x)^{n-j} = (-1)^{n-j} \binom{n}{j}\, x^{n-j}$.

So

$y = A \cdot B = \sum (-1)^{2n-i-j} \binom{n}{i} \binom{n}{j}\, x^{2i-n+n-j} = \sum (-1)^{2n-i-j} \binom{n}{i} \binom{n}{j}\, x^{2i-j}.$ … (2)

Since we are interested in $x^k$ with $k = \frac{3n+1}{2} = 2i - j$, we get $j = \frac{4i - 3n - 1}{2}$. … (3)

From (2) and (3), the middle term is $\sum (-1)^{2n-i-j} \binom{n}{i} \binom{n}{j}\, x^{(3n+1)/2}$, subject to the following conditions: $i = 0$ to $n$; $4i - 3n - 1 > 0$, i.e. $4i > 3n + 1$; and $j = \frac{4i - 3n - 1}{2}$.

Ragu Rajagopalan
Passionate Maths solver; reviving knowledge after 3 decades · 6y
Related: What is the binomial expansion of $(1+x)^{-1/2}$?

You can refer to the solution in the wiki: Taylor series - Wikipedia.

Shreyansh Pandey
Programmer, Calculus, Mentally Incorrect · 9y
Related: Is there a more general and simpler approach for finding the coefficients in binomial expansion quickly?

Well, yes. I will describe two methods to quickly calculate the coefficients of any binomial expansion.

Method 1 -- The Pascal's Triangle

Voilà! In view, a humble vaudevillian triangle, cast vicariously both as victim and villain by the vicissitudes of mathematics.
What I am talking about is the Pascal's Triangle. Personally, I don't think it's fair to call it the Pascal's Triangle, as it was discovered WAY before that by Indian and Iranian mathematicians; but since Blaise contributed so much to its patterns, let's just call it what it is. The triangle looks like a neatly stacked pile of numbers, but it's actually quite intriguing. The best part: you can calculate the binomial expansion of any $n$th degree.

Let's first see how you create the Pascal's Triangle. Imagine 1 surrounded by zeroes: 0 1 0. Now add the adjacent pairs and you'll get the next row: 0 1 1 0. You keep on doing this, and eventually you'll get a pretty long triangle. For the sake of simplicity, here is the picture of the completed triangle (courtesy Wikipedia).

Each row represents the binomial expansion to the $n$th power. The counting of the rows begins with 0, so the first row is actually row 0. Another thing you should know is that the power of the first term decreases from $n$ to 0; if it was $x^3$ in the first term, it'll be $x^0 = 1$ in the last term. It's the opposite with the second term: it increases from 0 to $n$.

With the theory aside, let's see these coefficients. Since the first row is row 0, it gives the binomial expansion of $(x+y)^0 = 1$. The second row gives the binomial expansion of $(x+y)^1 = 1x + 1y$. Let's take row 3, so $(x+y)^3$:

$1x^3y^0 + 3x^2y^1 + 3x^1y^2 + 1x^0y^3.$

So, you see, you get the coefficients here. Easy enough? Yeah, probably. Time consuming? Yeah! Let's see the other method; this method is actually the backbone of the Pascal's Triangle.

Method 2 -- The Combination Formula

In sequences and series, we come across this beautiful formula: $\binom{n}{k}$, which means the number of possible combinations in which I can choose $k$ elements out of $n$. This formula is further defined as

$\binom{n}{k} = \frac{n!}{k!\,(n-k)!},$

where the $n!$ operation is the product from 1 to that number, so $n! = 1 \times 2 \times 3 \times \dots \times n$, e.g. $4! = 4 \times 3 \times 2 \times 1 = 24$.

Now, in the formula, $n$ is the row (or the power) and $k$ is the number of the element for which you want to calculate the coefficient. Say you want to calculate the 2nd coefficient of the binomial expansion $(x+y)^3$ (again, I know :P). We know that it is 3, but let's try it with the formula:

$\binom{3}{2} = \frac{3!}{2!\,(3-2)!} = \frac{3!}{2!\,1!} = \frac{6}{2} = 3.$

Indeed, it is 3. In case you're interested in knowing the derivation of the Pascal's Triangle, and in computing the expansion itself of the equation, I'll be happy to tell you. :)

Lance Everett
Studied Nanoengineering & Mathematics at University of California, San Diego · Updated 6y
Related: How does one find the largest (absolute) coefficient of a binomial, is the middle term always the largest?

A2A. Yes, the middle terms are always the largest. The proof I provide is very computational; I am sure there are nicer combinatorial proofs that do not use meromorphic extensions (sorry for the jargon) or calculus. For instance, one might show that the binomial coefficients are symmetrical and unimodal less directly. But alas I am not a mathematician, especially I am not a combinatorist, and even more especially, I am very lazy.
The binomial coefficients satisfy a recursive equation:

$\binom{n+1}{k} = \binom{n}{k} + \binom{n}{k-1}.$

Given that $\binom{n}{0} = 1$, this can be used to calculate the binomial coefficients, either by hand or by using a computer program. It essentially means you can use Pascal's triangle. Proving this formula is a matter of how the binomial coefficients are defined; for instance, we can define them equivalently algebraically, combinatorially, or analytically. (See Lance Everett's answer to "How do you prove $\binom{n+1}{r} = \binom{n}{r} + \binom{n}{r-1}$?")

It turns out that

$\binom{n}{k} = \prod_{j=1}^{\infty} \left(1 + \frac{k(n-k)}{j(n+j)}\right)$

(see the attached proof: Extension of Binomial Coefficients with Standard Finite Difference Calculus, a q-analogue of Euler's Formula for the Factorial, by Lance Everett). Using this form, we can differentiate both sides with respect to $k$ and find that the derivative vanishes only when $k = n/2$. This implies that the function achieves a maximum in the middle.

In the same post above I provide a formula that can be deduced from the analytic form: for any positive integer $M \ge 2$,

$\binom{n}{k} = (n+M)^{n} (k+M)^{-k} (n-k+M)^{-(n-k)} \left(1 + \frac{k(n-k)}{M(n+M)}\right)^{\frac{1}{2}-M} \exp\int_{M+n-k}^{M+n} \left(\frac{1}{s} + \frac{1}{2s^2}\right) ds\, \exp\int_{M+n-k}^{M+n} \psi^{(2)}(s)\, ds\, \prod_{j=1}^{M-1} \left(1 + \frac{k(n-k)}{j(n+j)}\right),$

where $\psi^{(2)}$ is a polygamma function (Polygamma function - Wikipedia):

$\psi^{(1)}(s) = \frac{1}{s} + \frac{1}{2s^2} + \frac{1}{s} \sum_{k \in \mathbb{N}} \frac{B_{2k}}{s^{2k}}, \qquad \psi^{(2)}(s) = D_s\, \psi^{(1)}(s) = -\frac{1}{s^2} - \frac{1}{s^3} - \frac{1}{2s^4} - \cdots$

This can be used to provide an on-order estimation of the binomial coefficients without using recursion, since for sufficiently large $M$ the factor with the integrals is approximately 1.

Philip Lloyd
Former Specialist Calculus Teacher and Mentor · 1y
Related: What is the method for finding the coefficient of x in a binomial series?

I hope you don't just want to use standard formulae to do everything for you. I strongly suggest you will get a better feeling for these expansions if you remember the following easy-to-remember pattern (shown in the attached image). The pattern is much easier to remember than a formula for the general term!

John Pereira
Retired lecturer (Maths) · Updated 1y
Related: What is the reason for the middle term always being larger in a binomial expansion?

There are several reasons. In the expansion of $(a+b)^n$, the general term is $T_{r+1} = \binom{n}{r}\, a^{n-r} b^r$.
This is the term with rank $c = r + 1$, i.e. the $(r+1)$th term. Consider the binomial coefficients, their rank and the corresponding $r$:

Binomial coefficient: $\binom{n}{0}$, $\binom{n}{1}$, $\binom{n}{2}$, ..., $\binom{n}{n}$
Rank $c$ of the term: 1, 2, 3, ..., $n+1$
Corresponding $r$: 0, 1, 2, ..., $n$

The next thing you must recall is that $\binom{n}{r} = \binom{n}{n-r}$. Thus $\binom{10}{7} = \binom{10}{3}$, $\binom{10}{4} = \binom{10}{6}$, etc.

Lastly, consider

$\frac{(n+1)!\,(n-1)!}{n!\,n!} = \frac{(n+1)\,n!\,(n-1)!}{n!\,n\,(n-1)!} = \frac{n+1}{n} > 1.$

So $(n+1)!\,(n-1)! > n!\,n!$ (this is the most important part of the solution). Likewise, $(n+2)!\,(n-2)! > n!\,n!$, and $\left(\frac{n}{2}+1\right)!\left(\frac{n}{2}-1\right)! > \left(\frac{n}{2}\right)!\left(\frac{n}{2}\right)!$, and so on.

With all these ingredients we are ready to cook the goose. Let $n$ be even. Then there is one middle coefficient; call it $M$. The rank of this term is $c = \frac{n}{2} + 1$ and $r = \frac{n}{2}$. The term on its right, call it $R$, has $c = \frac{n}{2} + 2$ and $r = \frac{n}{2} + 1$. The term on its left, call it $L$, is equal to $R$ due to $\binom{n}{r} = \binom{n}{n-r}$.

Recall $M$ has $r = \frac{n}{2}$, so $M = \frac{n!}{(\frac{n}{2})!\,(\frac{n}{2})!}$. … (1)

$R$ has $r = \frac{n}{2} + 1$, so $R = \frac{n!}{(\frac{n}{2}+1)!\,(\frac{n}{2}-1)!}$. … (2)

Now $\left(\frac{n}{2}+1\right)!\left(\frac{n}{2}-1\right)! > \left(\frac{n}{2}\right)!\left(\frac{n}{2}\right)!$, therefore, taking reciprocals,

$\frac{1}{(\frac{n}{2}+1)!\,(\frac{n}{2}-1)!} < \frac{1}{(\frac{n}{2})!\,(\frac{n}{2})!},$

so $\frac{n!}{(\frac{n}{2}+1)!\,(\frac{n}{2}-1)!} < \frac{n!}{(\frac{n}{2})!\,(\frac{n}{2})!}$, that is, $R < M$. Then $L < M$ follows since $R = L$.

When $n$ is odd there are two middle coefficients; call them $M_1$ and $M_2$. For $M_1$, $c = \frac{n+1}{2}$ and $r = \frac{n+1}{2} - 1$. For $M_2$, $c = \frac{n+3}{2}$ and $r = \frac{n+3}{2} - 1$. You may continue from here.
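A quick numerical check of the claim proved in the answers above (our own sketch, not from any of the answers):

```python
from math import comb

def middle_is_largest(n):
    row = [comb(n, k) for k in range(n + 1)]
    # even n: the single middle entry is the maximum;
    # odd n: the two middle entries tie for the maximum
    return max(row) == comb(n, n // 2) == comb(n, (n + 1) // 2)

print(all(middle_is_largest(n) for n in range(1, 60)))  # True
```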
189309
https://www.youtube.com/watch?v=yP62Mov69mY
Hemoglobin Chapter 33 (part 7/9), Guyton and Hall Textbook of Medical Physiology. Medical Gateway
Posted: 12 Aug 2021
Description: To buy 'Medical Gateway – Lecture Notes' visit our Instagram page: 'medicalgateway9'. In this video we will study Hemoglobin. The topic of Hemoglobin will be covered from chapter 33 of Guyton and Hall Textbook of Medical Physiology.

Transcript (translated from the Hindi auto-captions and cleaned up):

We will study hemoglobin from chapter 33. Hemoglobin's job is to carry oxygen. Its formation starts with heme: two succinyl-CoA molecules combine with two glycine molecules to form one pyrrole ring. Four such pyrroles then join together to form protoporphyrin IX. If we add iron into protoporphyrin IX, it becomes heme. If we then attach a globin polypeptide to heme, we get a single hemoglobin chain, and four such chains combine to form one hemoglobin molecule.

The chains come in several types: alpha, beta, gamma, delta, zeta and epsilon. Different combinations of these chains give the different types of hemoglobin. The most important normal types are hemoglobin A, made of two alpha plus two beta chains, about 97% in the adult; hemoglobin A2, made of two alpha plus two delta chains, about 2%; and hemoglobin F, made of two alpha plus two gamma chains, which is basically the hemoglobin of fetal life but is still present at about 1% in the adult.

Now, which hemoglobins are found in embryonic life and which in fetal life? What is an embryo and what is a fetus? For the first 8 weeks of development we call the conceptus an embryo; after the 8th week we call it a fetus. In embryonic life other hemoglobins are made: Gower 1 (two zeta plus two epsilon chains), Gower 2 (two alpha plus two epsilon chains) and Portland (two zeta plus two gamma chains). Note these combinations down; they are frequently asked in exams.

So far these were the normal types of hemoglobin; now the abnormal ones. The most important is hemoglobin S, the sickle hemoglobin found in sickle cell anemia. Why is it called "sickle"? Because in this disease the red blood cells take the shape of a sickle, the curved tool used to cut small plants. Hemoglobin S still has two alpha and two beta chains, but in each beta chain the amino acid glutamic acid is pushed out and replaced by the amino acid valine. The molecule works at first, but as soon as that red cell faces low oxygen, the hemoglobin S inside it starts forming long crystals. The crystals elongate and deform the whole cell into the sickle shape. Remember that a normal red cell is a biconcave disc precisely so that it can squeeze through the smallest capillaries; a sickled cell cannot do this, so it gets stuck in a small capillary, ruptures there, and is cleared by macrophages. The whole red cell is destroyed, hemolysis increases, and the red cell count and hemoglobin level fall, causing anemia.

Worse, look at what the blockage does: the stuck cells block the capillaries, the blocked capillaries cut the blood supply, tissue oxygen falls further, more cells sickle, and more capillaries get blocked. This is a vicious, positive feedback cycle, and anything caught in a positive feedback cycle does not stop on its own; it keeps getting worse. This state is called a sickle cell crisis. It starts with a small patch of low oxygen, but if it is not treated the ischemia spreads, the tissues stop working and begin to die. So a sickle cell crisis must be treated immediately, for example with oxygen and rehydration.

Apart from this there is a second group of abnormal hemoglobins, found in the two thalassemias, alpha and beta. In alpha thalassemia the alpha chains that should normally be produced are not made, so the other chains pair with themselves instead: if four beta chains combine, we call it hemoglobin H, and if four gamma chains combine, we call it hemoglobin Bart's. So this was our chapter on hemoglobin and its abnormal forms.
189310
https://www.eso.org/public/outreach/eduoff/seaspace/navigation/navgps/navgps-3.html
Sea and Space / Navigation / GPS

How Does GPS Work?

The triangulation method

Now that you know the main principles of GPS, you may wish to learn more about the fundamental physics and mathematics behind GPS. Although it is not possible to go into great detail - the entire system is quite complex and many different aspects must be taken into account - here are some of the underlying facts. The basic principle inherent in GPS is to determine, with the best possible accuracy, a point in space, as defined by three coordinates: here geographical latitude and longitude, as well as elevation above sea level. For sailors, the elevation is not relevant! This is done by means of triangulation, that is, measurement of triangles. In practice, this involves determining the distances to at least three GPS satellites from the user's GPS receiver. The positions of the satellites in space are known at all times by means of various observational methods and orbital computations. When one distance is known, the user must be located on the surface of a sphere with the satellite at the centre and with a radius equal to this distance. With two distances, the location must be on a circle that represents the intersection between the two spheres. With three distances known, two points are possible, of which one will be far out in space and can be eliminated. Thus, the point in space has been determined.

Timing problems

However, there are timing problems to be overcome before this method will work. In particular, how is the distance to the satellite determined with the desired accuracy? In theory, this is done by measuring the arrival time of the signal from the GPS satellite. This signal carries timing information from the atomic clock on board the satellite, and the measured time delay thus indicates the distance (multiplying the time delay by the speed of light gives the distance). For this, the GPS receiver must also have an internal clock. However, it is not possible to install a high-precision atomic clock in a small hand-held GPS receiver! It is unavoidable that the precision of the clock in the receiver is much less than that of the atomic clocks in the satellites; the receiver clock may be some fractions of a second off. But how can the time delays then be measured? The trick lies in the fact that the time offset of the clock in the GPS receiver is considered as the fourth unknown (the first three being the three space coordinates of the receiver). In the first approximation, the offset is considered to be zero. Then, if a fourth satellite signal is received and a fourth distance is measured, it is also possible to determine this time offset with high precision and then to find the correct space coordinates. In other words, the four distances to the four satellites will only fit and determine one particular point in space if the time offset has a certain value. This calculation is done automatically by the software in the GPS receiver. This is the reason that the acquisition of three GPS satellites does not give very high precision, and that at least four are needed for a satisfactory measurement.

The satellite signals

The digital signals from the GPS satellites are emitted at two frequencies (1228 and 1575 MHz). They are received by the GPS receiver and contain much detailed information.
In addition to the timing signal, there are also data for identification of the satellite (by its number), about the status of the satellite clock, the satellite orbit, the current status of the satellite (health) and various correction data. The data is divided into frames of 1500 bits; one frame is transmitted in about 30 seconds. These data are stored in the receiver and updated regularly. The approximate directions and distances to individual GPS satellites that are momentarily above the horizon are calculated from the orbital data. Uncertainties There are of course many other uncertainties involved in a GPS measurement. For instance, the positions of the satellites are only known with a certain accuracy, the signals from the satellites are delayed in the ionosphere, background noise is introduced into the signal and may render part of it undecipherable, there may be reflections from the surrounding elements (houses, trees), etc. [Figure: Where are we?] All of this means that a position measurement can only reach a certain maximal accuracy. In practice, under the best of circumstances, this will be about ±15 metres for civil users - still not bad at all! Exercises:
1. The GPS satellites move in near-circular circumterrestrial orbits with radii near 26,000 km. The period is around 12 hours. What is the speed in the orbit?
2. Assuming that the distance between a GPS satellite and the GPS receiver is 24,000 km, what is the time delay that would be measured? (The speed of light is 300,000 km/sec.)
3. The best clocks on board GPS satellites are accurate to about 1 part in 10^14 (short-term stability). How long (in years) would it take for such a clock to be off by 1 second?
4. What kind of timing accuracy corresponds to a position uncertainty of 15 metres?
5. The circumference of the Earth is 40,000 km and corresponds to 360°. Which angle (in arcseconds) corresponds to 15 metres?
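For readers who want to check their answers, here is a small Python sketch of the arithmetic behind these exercises, using the rounded constants given in the text (26,000 km orbit radius, 12-hour period, 300,000 km/s for light, 40,000 km circumference); the printed values are approximate.

```python
import math

# Exercise 1: orbital speed of a GPS satellite.
r_km = 26_000                      # orbit radius in km
T_s = 12 * 3600                    # orbital period (~12 h) in seconds
speed = 2 * math.pi * r_km / T_s
print(f"orbital speed: {speed:.2f} km/s")          # about 3.8 km/s

# Exercise 2: signal time delay over 24,000 km.
c_km_s = 300_000                   # speed of light in km/s
delay = 24_000 / c_km_s
print(f"time delay: {delay:.3f} s")                # 0.080 s

# Exercise 3: years for a clock stable to 1 part in 10^14 to drift by 1 s.
seconds_to_drift = 1e14            # 1 s error / 1e-14 fractional stability
years = seconds_to_drift / (365.25 * 24 * 3600)
print(f"drift time: {years:.2e} years")            # about 3.2 million years

# Exercise 4: timing accuracy for a 15 m position uncertainty.
dt = 15 / 3e8                      # distance / speed of light (m, m/s)
print(f"timing accuracy: {dt * 1e9:.0f} ns")       # 50 ns

# Exercise 5: angle subtended by 15 m of the Earth's circumference.
angle_deg = 15 / 40_000_000 * 360  # 40,000 km = 360 degrees
print(f"angle: {angle_deg * 3600:.2f} arcseconds") # about 0.49 arcsec
```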
189311
https://www.sciencedirect.com/science/article/abs/pii/S0035378711004048
Neurological manifestations of Behçet's disease: Evaluation of 40 patients treated by cyclophosphamide
(Les manifestations neurologiques de la maladie de Behçet : étude de 40 patients traités par cyclophosphamide)
Revue Neurologique, Volume 168, Issue 4, April 2012, Pages 344-349. Original article.
E.H. Ait Ben Haddou¹, F. Imounan¹, W. Regragui, O. Mouti, N. Benchakroune, R. Abouqal, A. Benomar, M. Yahyaoui
Abstract
Introduction: Neurological manifestations in Behçet's disease represent between 4% and 49% of systemic manifestations and remain, in the long term, the leading cause of morbidity and mortality.
Methods: A retrospective series of 40 severe neuro-Behçet cases fulfilling the International Study Group criteria for Behçet's disease was consecutively recruited from June 2004 to December 2010. All patients had clinical and ophthalmologic examinations; they underwent laboratory and imaging investigations. They received corticosteroids and cyclophosphamide as initial boluses of 600 mg/m² of BSA on the 1st, 2nd, 4th, 6th and 8th day, followed by a bolus of 600 mg/m² of BSA every 2 months for 2 years. Antithrombotic therapy was given to patients with cerebral deep venous thrombosis. Patient follow-up and tolerance to treatment were analyzed.
Results: The average age at diagnosis was 34 ± 13 years, with a sex ratio of 1.78. The clinical presentation was dominated by meningoencephalitis in 48.8% of cases, cerebral deep venous thrombosis in 43.6% of cases and myelopathy in 7.7% of cases. Apart from two cases that worsened, the 40 patients receiving cyclophosphamide boluses evolved positively, with clinical improvement and good tolerance.
Conclusion: The demographic and clinical aspects of our series are similar to those reported in the literature. In contrast to previously reported cases of a poor prognosis in severe neuro-Behçet's disease, our study suggests that immediate and aggressive treatment with cyclophosphamide may ameliorate the prognosis. However, a multicenter study is needed to confirm the possible efficacy of cyclophosphamide and further assess the long-term tolerance.
Introduction
Behçet's disease (BD) is a multisystem relapsing inflammatory disorder of unknown cause. Neurologic involvement in Behçet's disease (NB) was first reported by Knapp in 1941 (Araji and Desmond, 2009). The first paper describing the pathology of the neurologic manifestations of Behçet's disease was published in 1944 (Araji and Desmond, 2009). The neurological involvement in BD is either caused by primary neural parenchymal lesions (neuro-Behçet's syndrome [NB]) or is secondary to major vascular involvement (vasculo-Behçet's syndrome [VB]) (Araji and Desmond, 2009). It is one of the most serious causes of long-term morbidity and mortality in BD. Although BD is rare in neurological practice in most countries, it is commonly mentioned in the differential diagnosis of inflammatory or demyelinating central nervous system diseases. Neurological manifestations represent between 4% and 49% of the manifestations of BD (Haghighi et al., 2005). The most common manifestation of NB consists of different combinations of cranial nerve palsy, dysarthria, unilateral or bilateral pyramidal tract signs, and ataxia with or without consciousness disturbance (Borhani, 2009). Less common central nervous system manifestations include hemiparesis, cognitive-behavioral changes, emotional changes, extrapyramidal signs and seizures. At the time of a neurologic attack, typical neuroradiologic findings have been defined. Polymorphonuclear pleocytosis and/or the absence of IgG oligoclonal bands are suggestive of NBD. The presence of worse prognostic factors should be considered when treatment is initiated. The presence of abnormal cerebrospinal fluid (CSF) and parenchymal involvement, especially of the brainstem, justifies more aggressive treatment (Bowirrat and Radi, 2010). Some NBD patients have an insidious onset of the disease with primary progressive central nervous system dysfunction, and others may display symptoms attributable to intracranial hypertension associated with dural venous sinus thrombosis (Bowirrat and Radi, 2010). In most cases of NB, corticosteroids should be given as an infusion of intravenous methylprednisolone followed by a slowly tapering course of oral steroids (Araji and Desmond, 2009). Immunosuppressive agents should be used in severe forms of NB such as meningoencephalitis, myelopathy and cerebral deep venous system thrombosis. In most series, the most commonly used immunosuppressants were azathioprine and methotrexate (Ghayad et al., 2009). Cyclophosphamide (CPM) has been reported only in sporadic cases and is used as a second-line treatment (Borhani, 2009). This study was conducted to describe the clinical and prognostic aspects of neurologic involvement in BD and to evaluate the tolerability of CPM in NB patients over 24 months.
Section snippets
Data collection
From June 2004 to December 2010, 40 severe NB cases satisfying the diagnostic criteria of the International Study Group for BD (International Study Group for Behçet's Disease, 1990) and managed in our neurology and neurogenetics department at Rabat Hospital (Morocco) were retrospectively and consecutively recruited and followed over a period of 24 months. Twenty-four males and 16 females were included in our study. The age ranged between 14 and 69 years old. The time…
Demographic and clinical characteristics of the population study
Our study is a retrospective series of 40 NB cases. The average age at diagnosis was 34 ± 13 years. The age at onset was between 14 and 69 years (mean 32.6). There was a slight male predominance (62% versus 38%), with a sex ratio of 1.78. All cases had oral aphthae; 30 cases (75%) had genital ulcerations. The skin pathergy test was positive in 75% of the patients tested. The clinical syndrome was dominated by meningoencephalitis in 48.8% of cases, cerebral deep venous…
Discussion
In the present study, we analyzed the clinical and evolutionary aspects of neuro-Behçet's disease and evaluated the tolerability of CPM in the treatment of the neurological manifestations of BD. Araji and Desmond reported a BD sex ratio of 2.8, with more men affected than women (Araji and Desmond, 2009), which is close to what was obtained in our series. The average age of our patients was 34 years, which matches the reported age of onset of NB (Araji and Desmond, 2009). The diagnosis of neurological…
Conclusion
The demographic and clinical aspects of our series are similar to those of the literature. In contrast to previous reports of a poor prognosis in NBD, our study suggests that immediate and aggressive treatment with CPM therapy may ameliorate the prognosis in patients with mild to moderate NBD by increasing survival and decreasing disability and mortality in this patient series. However, a multicenter study is needed to confirm the possible efficacy of CPM and further assess the long-term tolerance.
Disclosure of interest
The authors declare that they have no conflicts of interest concerning this article.
Acknowledgments
The authors would like to acknowledge Professor Azeddine Ibrahimi, professor of biotechnology at SUNY Stony Brook University in New York (USA), for his invaluable assistance, his comments and his diligent help in writing this article.
References (15)
Hamuryudan V, et al. Other forms of vasculitis and pseudovasculitis. Baillieres Clin Rheumatol (1997).
Andrews TC, et al. Low-dose intravenous cyclophosphamide therapy in a patient with neurological complications of Behçet's disease. Clin Rheumatol (2007).
Araji A, et al. Neuro-Behçet's disease: epidemiology, clinical characteristics and management. Lancet Neurol (2009).
Bank I, et al. Dural sinus thrombosis in Behçet's disease. Arthritis Rheum (1984).
Boone RM, et al. Thalidomide in the treatment of neuro-Behçet's syndrome. Br J Dermatol (1986).
Borhani HA. Treatment of neuro-Behçet's disease: an update. Expert Rev Neurother (2009).
Bowirrat A, et al. Neuro-Behçet's disease: a report of 16 patients. Neuropsychiatr Dis Treat (2010).
There are more references available in the full text version of this article.
¹ E.H. Ait Ben Haddou and F. Imounan contributed equally to this work.
Copyright © 2011 Elsevier Masson SAS. All rights reserved.
189312
https://libraryguides.centennialcollege.ca/c.php?g=717168&p=5125562
Events
What is an event? In probability, a set of outcomes from an experiment is known as an event. For instance, consider the experiment of tossing a coin. The outcome of this experiment may be a head or a tail - whatever takes place each time you toss the coin is the event. There are many different types of events that are applied in different situations. In this section we will be focusing on simple and compound events.
Simple & Compound Events
A simple event is one that can only happen in one way - in other words, it has a single outcome. If we consider our previous example of tossing a coin, we get one outcome: a head or a tail. A compound event is more complex than a simple event, as it involves the probability of more than one outcome. Another way to view compound events is as a combination of two or more simple events. Consider the probability of finding an even number less than 5. We have a combination of two simple events: finding an even number, and finding a number that is less than 5.
EXAMPLE Determine whether these are simple or compound events: a) Getting a number less than 2 or greater than 4 when spinning this spinner once. b) Getting heads when a coin is tossed and getting a 3 when a six-sided number die is rolled. See the video below for the solutions:
Probabilities for Simple and Compound Events
The probability of an event occurring requires two known variables: the number of times the event can occur, and the total number of possible outcomes. We use the following formula to calculate probability:
P(event) = (number of times the event can occur) / (total number of possible outcomes)
Let's try some problems!
1. Kyle works at a local music store. The store receives a shipment of new CDs of various genres in a box. In the shipment there are 10 country CDs, 5 rock CDs, 12 hip hop CDs, and 3 jazz CDs. What is the probability that the first CD Kyle chooses from the box will be country?
How many country CDs are there? 10 (the number of times the event can occur)
How many CDs could Kyle choose from? 10 + 5 + 12 + 3 = 30 (the total number of possible outcomes)
What is the probability that Kyle will choose a country CD first? P(country) = 10/30 = 1/3 (applying the probability formula - always reduce the answer to lowest terms!)
2. Kyle's store receives a new shipment of CDs in a box. In the shipment, there are 10 country CDs, 12 rock CDs, 5 hip hop CDs, and 3 jazz CDs. What is the probability that Kyle will select a jazz CD from the box, and then, without replacing the CD, select a country CD? This event consists of two simple events.
What is the probability of selecting a jazz CD? P(jazz) = 3/30 = 1/10
What is the probability of selecting a country CD without replacing the jazz CD? What is our new total? 30 - 1 = 29 CDs remain.
So the probability of selecting a country CD is P(country) = 10/29. What is the probability of the first event taking place, followed by the second event? P(jazz, then country) = (1/10) × (10/29) = 1/29. (Note - the final answer is found by multiplying the probabilities of the two events. We will discuss independent and dependent events later on!)
Statistics by Matthew Cheung. This work is licensed under a Creative Commons Attribution-NonCommercial 4.0 International License.
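To make the two worked problems above concrete, here is a short Python check using the standard-library Fraction type, which keeps each answer in lowest terms automatically:

```python
from fractions import Fraction

# Problem 1's box: 10 country, 5 rock, 12 hip hop, 3 jazz CDs.
total = 10 + 5 + 12 + 3                 # 30 CDs in the box

p_country = Fraction(10, total)         # simple event
print(p_country)                        # 1/3

# Problem 2's box: 10 country, 12 rock, 5 hip hop, 3 jazz (still 30 total).
p_jazz_first = Fraction(3, 30)          # first draw: 3 jazz out of 30
p_country_next = Fraction(10, 29)       # without replacement: 29 CDs remain
print(p_jazz_first * p_country_next)    # compound event: 1/29
```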
189313
https://www.nagwa.com/en/videos/627182538953/
Question Video: Geometric Interpretation of the Roots of Unity Mathematics • Third Year of Secondary School 1) Find all the solutions to 𝑧⁶ = 1. 2) By plotting the solutions on an Argand diagram, or otherwise, describe the geometric properties of the solutions of 𝑧⁶ = 1. Video Transcript 1) Find all the solutions to 𝑧 to the power of six equals one. 2) By plotting the solutions on an Argand diagram, or otherwise, describe the geometric properties of the solutions of 𝑧 to the power of six equals one. We could solve this equation by finding the sixth root of both sides. However, we know that there are going to be six solutions to this equation. So we need to consider an alternative method. Instead, we rearrange by subtracting one from both sides. And we see that 𝑧 to the power of six minus one equals zero. This is actually a special case of the difference of two squares, meaning we can write the expression on the left-hand side as 𝑧 cubed minus one multiplied by 𝑧 cubed plus one. And now, we have two numbers whose product is zero. This can only be the case if one of the numbers themselves is equal to zero. Let's start by saying that 𝑧 cubed minus one is equal to zero. We can observe that one of the solutions to this equation is one since one cubed minus one is indeed zero. This means that 𝑧 minus one must be a factor of 𝑧 cubed minus one. We could use polynomial long division to find the other factor. Or we could say that this means that 𝑧 cubed minus one is equal to 𝑧 minus one multiplied by some quadratic. And then, we can equate coefficients of 𝑧. Distributing the brackets, and we see that 𝑎𝑧 cubed plus 𝑏 minus 𝑎 𝑧 squared plus 𝑐 minus 𝑏 𝑧 minus 𝑐 equals 𝑧 cubed minus one. Equating coefficients of 𝑧 cubed, we see that 𝑎 is equal to one. And that's because the coefficient of 𝑧 cubed on the right-hand side is just one. The coefficient of 𝑧 squared on the right-hand side is zero. So we see that when we equate coefficients of 𝑧 squared, we get 𝑏 minus 𝑎 equals zero. 𝑎 is of course one. So 𝑏 minus one is zero, which means that 𝑏 must be equal to one. We're going to skip equating coefficients of 𝑧 to the power of one and go straight to equating constants or coefficients of 𝑧 to the power of zero. We see that negative 𝑐 equals negative one, which means that 𝑐 is equal to one. And this means that 𝑧 cubed minus one is equal to 𝑧 minus one multiplied by 𝑧 squared plus 𝑧 plus one. We then solve 𝑧 squared plus 𝑧 plus one equals zero by either using the quadratic formula or completing the square. If we use the quadratic formula, we see that 𝑧 is equal to negative one plus or minus the square root of one squared minus four times one times one, all over two times one. That's negative one plus or minus the square root of negative three over two. We'll split this up and write it as negative one-half plus or minus the square root of negative three over two. And since the square root of negative one is 𝑖, our solutions for 𝑧 become negative a half plus or minus the square root of three over two 𝑖. We'll repeat this process for 𝑧 cubed plus one is equal to zero. This time, we can spot that one of the solutions to this equation is 𝑧 equals negative one. And that's because negative one cubed plus one is equal to zero. This time, that means that 𝑧 plus one must be a factor of 𝑧 cubed plus one. And we can say that we can write 𝑧 cubed plus one as 𝑧 plus one multiplied by some quadratic in 𝑧.
This time, distributing the brackets, and we see that 𝑎𝑧 cubed plus 𝑎 plus 𝑏 𝑧 squared plus 𝑏 plus 𝑐 𝑧 plus 𝑐 equals 𝑧 cubed plus one. And this time, when we equate coefficients, we get that 𝑎 is equal to one. 𝑏 is equal to negative one. And 𝑐 is equal to one. So 𝑧 cubed plus one is equal to 𝑧 plus one multiplied by 𝑧 squared minus 𝑧 plus one. This time, we solve 𝑧 squared minus 𝑧 plus one equals zero, once again using the quadratic formula or possibly completing the square. And when we do, we can see that 𝑧 is equal to one-half plus or minus the square root of three over two 𝑖. And we see that we now have the six solutions to the equation 𝑧 to the power of six equals one that we were looking for. And if we want to, we could check these solutions by substituting them back into the equation 𝑧 to the power of six equals one and checking that our answers make sense. For part 2), we're going to plot these points on an Argand diagram. 𝑧 equals one and 𝑧 equals negative one are fairly straightforward. We have the point a half, root three over two representing the solution one-half plus root three over two 𝑖. And we have negative a half, root three over two, representing the solution negative a half plus root three over two 𝑖. We can plot the other two solutions as shown. And what about the geometric properties? Well, we can see that these complex numbers are evenly spaced about the origin. In fact, the solutions are the vertices of a regular hexagon inscribed in a unit circle whose centre is the origin.
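As a quick numerical check of the six solutions found in the transcript, here is a short Python sketch using the polar form e^(2πik/6); the printed roots match ±1 and ±1/2 ± (√3/2)𝑖, and their equal spacing around the unit circle is exactly the regular hexagon described above.

```python
import cmath

# The six sixth roots of unity are exp(2*pi*i*k/6) for k = 0..5.
roots = [cmath.exp(2j * cmath.pi * k / 6) for k in range(6)]

for z in roots:
    # Each root should satisfy z**6 = 1 (up to floating-point error).
    print(f"z = {z.real:+.3f} {z.imag:+.3f}i,  z**6 = {z**6:.3f}")
```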
189314
https://blog.wolframalpha.com/2011/04/25/algebraic-simplification-simplifying-expressions-in-wolframalpha/
Algebraic Simplification: Simplifying Expressions in Wolfram|Alpha—Wolfram|Alpha Blog
April 25, 2011 — Sam Blake
Wolfram|Alpha is written in Mathematica, which as its name suggests is a fantastic system for doing mathematics. Strong algorithms for algebraic simplification have always been a central feature of computer algebra systems, so it should come as no surprise that Mathematica excels at simplifying algebraic expressions. The main two commands for simplifying an expression in Mathematica are Simplify and FullSimplify. There are also many specific commands for expressing an algebraic expression in some form. For example, if you want to expand a product of linear polynomials, Expand is the appropriate function. The good news is that everyone has access to the power of Mathematica's simplification and algebraic manipulation commands in Wolfram|Alpha. We will now outline some of these features in Wolfram|Alpha, starting with the expression: and we will use Wolfram|Alpha to break it down to something significantly simpler. The expression simplifies to zero (excluding |x| = 3). Let's take a closer look at this simplification. First, the denominators. We will expand the polynomial (x – 3)(x + 3) and see that it is equal to the other denominator. So we have a common denominator: Now we can simplify the numerator: We see that it is zero. Let's look at some other examples. We can express sin(n x) in terms of a polynomial in sin(x) and cos(x) by asking Wolfram|Alpha: We can factor polynomials over their splitting fields: Form a single fraction from a number of terms: Express a single fraction in partial fraction form: And solve equations: We now have implemented a new simplification program for Wolfram|Alpha, which allows Wolfram|Alpha to find even more alternate and simplified forms for algebraic expressions. Here are a couple of examples. In the near future, we will have the functionality to show the steps used to derive an algebraic simplification. Stay tuned to this blog for more details!
9 Comments
Soon, Wolfram Alpha will be better than Mathematica itself. Posted by Bartek April 25, 2011 at 12:58 pm
That is an extravagantly helpful new development. Great work, W|A team. Loud applause from my corner! Posted by Dave Busey April 26, 2011 at 8:05 pm
W|A must be a Scientific Computational Engine. Wolfram Alpha from the start has been distinguished from search engines in that its answers are to be reliable. This was to arise because its data, including algorithms, was to be curated. It was not said explicitly but the only suitable criteria I am aware of are those applied to scientific work. One of the criteria for scientific work is that it is repeatable by other scientists. Thus all W|A output should be repeatable by the user. This requires that users have the option to see all or any of the sources of the data and the logical path leading to W|A's conclusion. Posted by Brian Gilbert April 27, 2011 at 1:35 am
W|A team, excellent job! Please, when possible, try to make this web-blog more smartphone friendly. Posted by Mauricius GV April 27, 2011 at 2:40 pm
If the number of bracket pairs in an expression exceeds 2 or 3 then it becomes difficult to sort and pair them visually. What can be done to facilitate that?
Each pair should have its own color (but colors too are often not distinguishable enough) or a tiny but readable ordinal number or symbol below it. Each pair gets its own symbol or ordinal number. An example with 9 pairs: ((a/bc(dxe(f)gh))^i+(jk))lm+n)o(pg(r(s-t(uv))) How quickly can you pair them? Posted by Vasudev Godbole April 29, 2011 at 8:00 pm
W|A needs help with solving Elimination standard form lol Posted by austin May 2, 2011 at 4:21 pm
systems of linear Equations Posted by austin May 2, 2011 at 4:22 pm
Has my suggestion (of April 29, 2011) about bracket-pairing difficulties in long algebraic expressions with many bracket pairs been discussed? I think it is an easily soluble problem and will make it easier to interpret or work through algebraic expressions. Posted by Vasudev Godbole August 20, 2011 at 8:20 am
Will there be the chance to have a step-by-step simplification of expressions like 6xx^(1/2)+3x^2(1/2)x^(-1/2)? Posted by Chris October 2, 2014 at 1:20 am
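The expressions in the post above were embedded as images that did not survive extraction, but the prose pins down the shape of the opening example: two fractions whose denominators are (x - 3)(x + 3) and its expanded form, with a numerator that cancels to zero. The following sketch reproduces that expand/simplify workflow in SymPy rather than Wolfram|Alpha's engine, and the numerator (x + 1) is a made-up stand-in chosen purely for illustration:

```python
from sympy import symbols, expand, simplify

x = symbols('x')

# Step 1: expanding (x - 3)(x + 3) shows it equals the other denominator, x**2 - 9.
print(expand((x - 3)*(x + 3)))        # x**2 - 9

# Hypothetical stand-in for the post's expression (the real one was an image):
# two fractions over that common denominator whose numerators cancel.
expr = (x + 1)/((x - 3)*(x + 3)) - (x + 1)/(x**2 - 9)

# Step 2: over the common denominator the numerator is (x + 1) - (x + 1) = 0,
# so the whole expression simplifies to zero (for |x| != 3, where it is defined).
print(simplify(expr))                 # 0
```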
189315
https://www.kenhub.com/en/videos/types-of-bones
Video: Types of bones | Kenhub
Highlights: 0:38 Long bones | 1:37 Short bones | 1:53 Flat bones | 2:19 Irregular bones | 2:40 Sesamoid bones
Transcript
Hello, and welcome to Kenhub! In today's tutorial, we'll explore the different types of bones in the human skeleton. There are five distinct types of bones, classified by their shape and structure. These are: long bones, short bones, flat bones, irregular bones, and sesamoid bones. Here on the right, you can see the femur, a typical example of a long bone. Most long bones have a characteristic structure with a number of common parts.
The diaphysis, equivalent to the body or shaft, is tubular with a thick outer layer of compact bone. Contained within it is the medullary cavity, which is bordered by a small amount of spongy bone tissue surrounding the bone marrow. Found at the ends of the long bone are the epiphyses, which are usually wider and more prominent. They consist primarily of spongy bone surrounded by a thinner layer of compact bone. They develop from a center of ossification separate to that of the diaphysis and are partly covered by a layer of articular cartilage where joints form with other bones. Between the diaphysis and epiphysis lies the metaphysis, where longitudinal growth occurs. This growth is driven by the epiphyseal plate, a cartilage layer that ossifies over time, forming the epiphyseal line once growth is complete. Finally, we also have apophyses, which are bony outgrowths that usually function as sites for tendon and ligament attachment, such as the greater and lesser trochanters of the femur, as you can see here. They also develop separately to the rest of the bone before fusing with it later in growth. Long bones are formed through endochondral or indirect ossification. This process involves the transformation of cartilage into bone, rather than intramembranous or direct formation from connective tissue. Key examples of long bones include the humerus, ulna, and radius in the upper limb, and the femur, tibia, and fibula in the lower limb. Additionally, the metacarpal bones, metatarsal bones, and the phalanges of the fingers and toes are all classified as long bones. The clavicle is another important example. Similar to long bones, short bones consist largely of spongy bone encased in a thin layer of compact bone. This type of bone is mainly found in the hands and feet. Examples include the carpal bones and tarsal bones. Flat bones are found in areas subjected to various mechanical forces and often serve to provide protection to soft tissues deep to them. They feature a sandwich-like arrangement with strong outer and inner layers of compact bone, which encase spongy bone and bone marrow. These bones form through intramembranous or desmal ossification, a process involving the gradual transformation of embryonic connective tissue known as mesenchyme into bone. Examples of flat bones include some of the skull bones, in particular the neurocranium, and also the ribs, the sternum in the thoracic cage, and the scapula. Irregular bones, as their name suggests, vary in shape and structure and therefore do not fit into any other category. They often have a fairly complex shape, which helps protect internal organs. Examples of this group include several bones of the skull, like the sphenoid and ethmoid bones, as well as the vertebrae and the hip bone. Sesamoid bones are small, round bones embedded within a tendon, typically found in locations where a tendon passes over a joint. These bones function to reduce friction, modify pressure, and enhance the mechanical efficiency of muscle-tendon systems by acting as pulleys. Common examples of this type include the sesamoid bones of the thumb and big toe as well as the patella, which is the largest sesamoid bone in the human body. The patella is embedded in the tendon of the quadriceps femoris muscle, where it serves two main functions: it extends the lever arm of the muscle and reduces the sliding resistance of the tendon. And that's it for today! Thank you for watching and see you next time!
189316
https://scholarspace.manoa.hawaii.edu/bitstreams/6b89593b-dda6-4fb6-9c54-695d0c09d702/download
Language Learning & Technology, 2025, Volume 29, Issue 1, pp. 1–12. ISSN 1094-3501. CC BY-NC-ND.
TECHNOLOGY IN PRACTICE FORUM
Leveraging COCA to teach collocations with high mutual information scores
Quy Huynh Phu Pham, Faculty of Foreign Languages, Ton Duc Thang University, Ho Chi Minh City, Viet Nam (corresponding author: phamhuynhphuquy@tdtu.edu.vn)
Abstract
Over the past decades, the Corpus of Contemporary American English (COCA), an online corpus tool, has been effectively used in teaching collocations. However, most prior instructional interventions have focused on general collocational knowledge, often involving tasks where students use COCA to explore collocational patterns or correct erroneous collocations. This teaching-oriented article presents an alternative pedagogical application: using COCA to teach collocations with high mutual information (MI) scores, a type of collocation that remains underexplored in the English language classroom despite its potential to improve learners' collocational competence. More specifically, the present article introduces a 10-week writing course that incorporates a range of COCA-based activities aimed at developing students' ability to use collocations with high MI scores effectively. Drawing on analyses of student essays and their reflections on their learning experiences with the tool, the article discusses both the benefits and challenges of this approach. It also offers pedagogical recommendations to inform the instruction of collocations with high MI scores in the English language classroom.
Keywords: COCA, Collocations with High MI Scores, Corpus Tool, Writing
Language(s) Learned in This Study: English
APA Citation: Pham, Q. H. P. (2025). Leveraging COCA to teach collocations with high mutual information scores. Language Learning & Technology, 29(1), 1–12.
Introduction
The Corpus of Contemporary American English (COCA) (Davies, 2008) is an online resource including over one billion words of American English, collected across diverse genres from 1990 to 2019. This corpus is categorized into eight genres, including popular magazines, academic texts, and fiction, making it the largest and most balanced freely available corpus to date (Tsai, 2019). COCA offers a range of tools to explore linguistic patterns. For instance, the Word function provides information about word frequency, usage across contexts, and collocates, while the Compare function allows users to analyse differences in meaning between synonym pairs. Moreover, COCA highlights target patterns in context, helping learners become more aware of how these patterns appear in real usage. To access COCA, users can register for a free account at https://www.english-corpora.org/coca/. Over the past years, COCA has been effectively used to enhance students' collocational knowledge and competence (e.g., Fang et al., 2021; Li, 2023; Tsai, 2019). However, previous interventions have primarily focused on general collocational knowledge, typically involving activities where students employed COCA to explore collocational patterns (e.g., Tsai, 2019) or correct erroneous collocations (e.g., Fang et al., 2021; Li, 2023).
In the present article, I demonstrate an alternative pedagogical application of COCA: teaching collocations with high mutual information (MI) scores¹ (CMS), a type of collocation that has received limited attention in the English language classroom despite its potential to improve learners' collocational competence (e.g., Garner et al., 2019; Granger & Bestgen, 2014). In the following sections, I will first outline the instructional context and justify the importance of teaching CMS. Next, I will describe the current teaching practice, focusing on various COCA-based activities, followed by my reflections on the practical benefits and challenges of using COCA for CMS instruction. The final section offers practical tips and suggestions to improve the current teaching practice.
Instructional Context
The teaching activities were implemented at an English language centre in Vietnam through an online class conducted via Skype. The class focused on IELTS writing and consisted of seven university students (one male and six females) aged 20 to 22 from various fields of study. Their English proficiency ranged from 5.5 to 6.0 on the IELTS scale. Two students had previously taken the IELTS test. At the beginning of the course, students were asked, "Have you ever used or heard of COCA?" They typed their responses in the Skype chat box, and all of them confirmed "no". The class met twice a week for two hours over 10 weeks. During the course, students were taught different types of IELTS essays and engaged in multiple activities from the two textbooks, Complete IELTS Bands 5-6.5: Student's Book with Answers (Brook-Hart & Jakeman, 2012) and Complete IELTS Bands 5-6.5: Workbook with Answers (Harrison, 2012). The primary objective of the course was to develop students' writing skills according to the four IELTS writing assessment criteria: Task Response, Coherence and Cohesion, Lexical Resource, and Grammatical Range and Accuracy, each accounting for 25% of the total score. CMS instruction was incorporated into this writing class for several reasons. First, empirical research consistently highlights a positive correlation between CMS usage and writing quality (e.g., Garner et al., 2019). Comparative studies further show that non-native writers use CMS less frequently than native speakers (e.g., Durrant & Schmitt, 2009), while more advanced learners are more likely to use CMS than lower-level learners (e.g., Granger & Bestgen, 2014). Collectively, these findings suggest that increasing CMS usage can significantly enhance students' writing proficiency. Despite the role of CMS in enhancing writing quality, longitudinal studies indicate that its development is slow (e.g., Siyanova-Chanturia, 2015). Cross-sectional research further reveals that even advanced second-language (L2) learners often rely on collocations with lower MI scores (e.g., Chen, 2019). This is largely due to the lack of explicit instruction and limited opportunities for students to practice and apply CMS in meaningful contexts (Chen, 2019). Granger and Bestgen (2014) similarly argued that pedagogical materials often focus on word-like units, such as phrasal verbs, while neglecting CMS. To help students reach an advanced level of phraseological competence, CMS instruction must be integrated into the classroom.
Finally, beyond aiming for a higher IELTS score, all participants in the current course expressed a desire for activities that would help them expand their vocabulary, not just in terms of quantity, but also in learning how to use words more accurately and appropriately in context. Therefore, using COCA to incorporate CMS into their writing is highly relevant to these needs, particularly in enhancing both the range and precision of their lexical usage. Given these reasons, I designed various COCA-based activities to enhance students' knowledge and use of CMS. COCA was chosen as the focal pedagogical tool because it is one of the few online corpus tools that allows users to search for CMS across diverse contexts and parts of speech. Moreover, COCA provides both MI scores and collocation frequency, enabling users to select CMS that are both highly associated and commonly used. Finally, COCA offers contextual usage examples that help improve students' understanding of how CMS are applied in authentic contexts. The following section outlines the implementation of the present teaching practice, with activities developed based on Pham (2023).
Description of Teaching Practices
In the first week of the course, students were introduced to the course content, syllabus, and assignment submission process, with each student required to submit one essay per week using individual Google Drive folders for uploading their essays. During the remaining time (around 1.5 hours), students were trained to use COCA to search for and incorporate CMS into their writing. The training was structured into three steps. In Step 1, students learned how to search for CMS in COCA. In Step 2, they completed three activities: predicting MI scores, comparing synonyms using CMS, and analysing CMS usage in expert writing. In Step 3, students used COCA to improve collocational usage in a sample paragraph². The following sections provide a step-by-step description of this training process.
Step 1. Instruct Students to Use COCA to Search for Collocations with High MI Scores
I introduced COCA to students, explaining that it contains over one billion words of American English collected from various genres. COCA offers seven major functions: List, Chart, Word, Browse, Collocates, Compare, and KWIC, which allow users to explore various linguistic patterns in the corpus. Students were informed that COCA offers two types of accounts: a free account, which allows up to 20 searches per day, and a premium one, which allows up to 200. Students were given 5 minutes to create a free account, while upgrading to a premium account was left to their discretion. Then, I introduced the concept of MI scores to help students understand how word associations are measured and emphasized the importance of incorporating CMS into their writing. Students were told that CMS can be retrieved using either the Collocates or Word function. However, for the purpose of this training, we focused solely on using the Word function because the Collocates function does not allow users to group collocates by part of speech, making it difficult to interpret collocational patterns. Next, I demonstrated how to use the Word function to search for collocates of a target word. For example, using "employment" as the target word, students entered it into the search box, clicked "See detailed info for word" (Figure 1) and selected the "Collocates" option (Figure 2).
The resulting interface (Figure 3) displayed key information across four columns: frequency of the collocate pair (Column 1), MI scores (Column 2), the collocate grouped by part of speech (Column 3), and contextual usage (Column 4). For instance, "equal employment" appeared 740 times with an MI score of 5.84, indicating a strong collocation. In contrast, "fair employment" appeared only 115 times, with a much lower MI score of 2.48, suggesting it is not a strong collocation. Students were shown how to determine whether the node word (e.g., "employment" in this example) occurs more frequently before or after a collocate by observing the colour shading on either side. The darker the colour, the more frequently the node word appears in that position. For example, in Figure 3, the cell after "seek" is highlighted, indicating that "seek employment" is more common than "employment seek". In contrast, the cell before "opportunity" is shaded, showing that "employment opportunity" is more frequent than "opportunity employment". By default, collocates are sorted by frequency, with the most frequent ones highlighted in blue. For example, "full employment" and "equal employment" appear in the darkest blue because they are the most frequent adjective-noun combinations. To sort collocates by MI scores, students were instructed to click on "Advanced options" in Figure 3. Step 1 took about 15 minutes and did not require any materials.
Figure 1. "See Detailed Info for Word" Function in COCA
Figure 2. "Collocates" Option in COCA
Figure 3. Collocates with "Employment"
Step 2. Conduct Activities to Promote Students' Knowledge of Collocations with High MI Scores
After students had gained a basic understanding of COCA and MI scores, I conducted three activities designed to (1) deepen their understanding of CMS and (2) enhance their ability to search for these collocations in COCA.
Activity 1. Predicting MI Scores
Students were given a list of word combinations³ with varying MI scores (see Table 1) and worked in pairs to decide whether each had an MI score above 3. They wrote "Yes" if they believed it did, and "No" if it did not. Next, students worked independently to verify their predictions by using COCA to check the MI scores of the collocations. Finally, they used COCA to extract sentence examples of collocations with MI scores over 3, as listed in Table 1, and then shared these examples with their peers. The goal of this activity was to help students develop the ability to search for CMS in COCA and assess collocations based on the MI score threshold. Students also learned how to use COCA to explore the contextual usage of CMS. This activity took about 15 minutes, with approximately 15 minutes spent on material preparation.
Activity 2. Comparing Synonyms Using Collocations with High MI Scores
Students were given two synonyms, "important" and "critical", and asked to search for their collocates with high MI scores. They were asked to use "Advanced options" to sort the collocates by their MI scores. Afterward, students discussed their observations in small groups. This activity aimed to help them understand the importance of CMS in distinguishing between closely related synonyms. When two words, such as "important" and "critical", have similar meanings, it can be challenging to determine which words they strongly associate with and the subtle differences in their usage.
By analysing CMS, students can make more accurate word choices and gain a deeper understanding of the nuanced differences between synonyms. For example, through the activity, students observed that "important" strongly collocates with nouns such as "determinant", "milestone", and "caveat" (Figure 4), whereas "critical" is more strongly associated with nouns like "acclaim", "thinking", and "pedagogy" (Figure 5). This activity took about 15 minutes and did not require any materials.
Table 1. A List of Sample Word Combinations (students recorded their yes/no predictions; MI scores in COCA)
Adverb + important: equally important (4.78), increasingly important (3.32), vastly important (2.05), really important (1.63)
Verb + tasks: handle tasks (2.63), organize tasks (2.41), perform tasks (5.57), accomplish tasks (5.75)
Adjective + observations: careful observations (3.55), scientific observations (3.47), personal observations (2.81), critical observations (1.92)
Figure 4. Collocates with "Important"
Figure 5. Collocates with "Critical"
Activity 3. Exploring Collocations with High MI Scores in Expert Writing
I selected a sample essay (see below) from Complete IELTS Bands 5-6.5: Workbook with Answers (Harrison, 2012) and created a fill-in-the-blank exercise. I first explained to students that the essay was written by an IELTS expert. Students read through the essay and used COCA to identify appropriate collocations to complete the blanks, noting the MI scores of their chosen collocations. Afterward, I provided them with the answers (see Table 2). Students were then divided into small groups to discuss their observations of collocations used by an IELTS expert. The goal of this activity was to help students analyse CMS in expert writing. As shown in Table 2, most collocations used by the expert writer have MI scores higher than 3, allowing students to recognize the significance of CMS in high-quality writing. Since multiple collocations could fit each blank, using COCA enabled students to develop autonomy in selecting appropriate CMS while ensuring they fit the context. This activity took 25 minutes, with approximately 25 minutes spent on material preparation.
Sample Essay
Nowadays, children (1) ………. many more opportunities to (2) …………. a healthy life than in the past because generally they live in hygienic surroundings and have plenty to eat. However, (3) ………… lifestyles mean that many children (4) ……….. long hours in front of the television or computers, do not take a (5) ………….. deal of exercise and eat an unbalanced diet. I believe both parents and schools can do a lot to (6) ……………. this situation. Parents should limit the time that children (7) ………….. sitting down and should encourage them to take more exercise. They could, for example, (8) …………….. sports with them at the weekend. Schools also should include (9) ……………… exercise in their timetables, with activities such as (10) …………… education and compulsory sports. (Harrison, 2012, p. 152)
Table 2. Word Combinations Used by an IELTS Expert (MI scores in COCA)
have opportunities (N/A), lead (a) life (N/A), modern lifestyles (3.00), spend hours (4.23), great deal (4.08), remedy (this) situation (6.05), spend time (3.79), play sports (2.73), regular exercise (4.99), physical education (5.84)
Step 3. Conduct a Revision Activity to Improve Students' Use of Collocations with High MI Scores
Students received a short paragraph (see below) containing word combinations with varying MI scores (see Table 3).
They were instructed to employ COCA to refine collocational use by considering both MI scores and contextual usage. For example, if a word combination had a low MI score or was absent from COCA, they could choose a more suitable pairing. After revising the paragraph, students collaborated in groups to discuss their revisions and the strategies they applied.

Sample paragraph

Firstly, university helps young people have many amazing experiences. They can make new friends, join sports clubs, or attend interesting events that allow them to explore new interests and socialize. Through these activities, students can enjoy their time at university while building strong connections with others. Secondly, university also allows young people to improve their skills, such as communication and collaboration. For example, students often need to work in groups to complete projects, solve problems, or prepare presentations. These experiences help them become more confident in expressing their ideas and working effectively with others.

The activity aimed to help students effectively use COCA to enhance collocational usage. Revision tips were shared during class discussions. For example, to use COCA for revision, students could underline specific word combinations in their essays (e.g., adjective-noun or verb-noun pairs), particularly those involving common adjectives (e.g., amazing, new) or common verbs (e.g., have, take), and then improve the MI scores of these combinations. Another tip is that there is no need to revise every word combination with a low MI score, as this would require an extensive search in COCA and could be time-consuming. Instead, realistic goals should be set, such as deciding in advance how many CMS should be included in the essay. Finally, word combinations that make sense in context are perfectly fine to use. For instance, "enjoy time" is acceptable even if it does not qualify as a strong collocation. Therefore, contextual usage should be considered when revising the essay. This activity took 20 minutes, with approximately 25 minutes spent on material preparation.

Table 3. Word Combinations in the Sample Paragraph

Word combinations       MI scores in COCA
have experiences        N/A
amazing experiences     2.15
new friends             N/A
join clubs              3.80
attend events           4.12
interesting events      N/A
enjoy time              N/A
build connections       1.42
strong connections      3.07
improve skills          4.27
complete projects       3.83
solve problems          6.96
prepare presentations   3.09

To summarize, in the first week of the course, students participated in various COCA-based activities aimed at enhancing their ability to extract and incorporate CMS into their writing. In the subsequent weeks, students engaged with the textbook-based activities as instructed. Throughout the course, students were encouraged to use COCA to check CMS while writing their essays. Beyond this, no additional COCA training or COCA-based activities were provided. In the following section, I will reflect on the practical benefits and challenges of implementing the current approach, drawing on my personal observations, an analysis of student essays, and semi-structured interviews conducted at the end of the course.

Practical Benefits and Challenges

From my observation, there was an increased use of CMS in student writing. To confirm this, I collected all essays submitted by students throughout the 10-week course. Following Durrant and Schmitt (2009), 1,420 word combinations were manually extracted and categorized into different MI score bands. This analysis revealed that 42.68% of the collocations had an MI score greater than 3. Additionally, the proportion of CMS ranged from 31.58% to 51.72% across individual students, indicating a substantial increase in their use of CMS.
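For teachers who record the extracted combinations in a spreadsheet, this banding step is easy to automate. The sketch below is a hypothetical illustration rather than the procedure used in the article (the file name and column names are invented, and the MI scores themselves would still have to be looked up in COCA):

```python
import pandas as pd

# Hypothetical table: one row per extracted word combination, with the
# student who wrote it and the MI score looked up in COCA.
combos = pd.read_csv("combinations.csv")  # columns: student, pair, mi

# Band the MI scores, then report the share of combinations in each band.
combos["band"] = pd.cut(combos["mi"], bins=[-10, 0, 3, 5, 7, 100],
                        labels=["<=0", "0-3", "3-5", "5-7", ">7"])
print(combos["band"].value_counts(normalize=True))

# Proportion of CMS (MI score above 3) overall and per student.
print((combos["mi"] > 3).mean())
print(combos.groupby("student")["mi"].apply(lambda s: (s > 3).mean()))
```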
Another observation was that students demonstrated greater autonomy in their writing process. When I marked students' essays, I observed that they developed various strategies to annotate and record CMS, such as highlighting them in the essays and categorizing them into different parts of speech. A few students even noted the exact MI scores of the collocations they used and marked replacements with higher MI scores. These strategies indicate students' proactive efforts to integrate CMS into their writing. I believe that this independent use of COCA stemmed from students recognizing the tool's benefits. The semi-structured interviews revealed that students identified four main advantages of using COCA to extract CMS: (1) enhancing the naturalness of their writing, (2) helping generate new ideas, (3) improving collocational accuracy and appropriateness, and (4) increasing chances of achieving higher scores in writing assessments. These benefits have also been documented in previous studies that highlighted COCA's role in improving collocational competence (e.g., Fang et al., 2021; Li, 2023; Tsai, 2019).

Despite these practical benefits, I have also observed several challenges related to the implementation of the current teaching practice. The first challenge is the time-consuming nature of using COCA to search for CMS. Although COCA is a valuable tool, it often produces long lists of collocations that can overwhelm students. Those unfamiliar with how to interpret the output may struggle to quickly identify suitable CMS for their writing tasks. In fact, three out of the seven students I interviewed mentioned this issue, with one noting the difficulty of efficiently finding appropriate CMS for her essays. Another practical limitation is COCA's restriction to 20 searches per day for free accounts. This issue was experienced by four students, one of whom shared, "One challenge I encounter is that the search is limited to 20 queries for a free COCA account. Whenever I write, I must use 7 different accounts, which is quite inconvenient and time-consuming." Such disruptions during the writing process can be frustrating, as they could slow down progress and negatively affect students' learning motivation. Finally, due to the curriculum, I was only able to conduct one COCA training workshop in the first week of the course. As a result, I was unable to monitor students' difficulties and provide timely support and guidance. In the interviews, three students expressed a desire for more opportunities to engage with COCA. In particular, one commented, "The in-class activities are fine, but we should have more after-class activities to reinforce students' understanding and use of these collocations."

Lessons Learned

Based on my observations and student sharing, I would like to offer some practical tips to improve the current teaching practice. First, to address the challenge of COCA's time-consuming and inconvenient search process, the teacher could organize collaborative work.
For example, if the writing topic is about movies, the teacher could provide a list of key words or have students brainstorm related terms, such as "director", "actor", "screenplay", and "location". Each student would then select 3–5 key words, search for CMS in COCA, and share their findings with classmates. This approach would not only reduce search time but also foster a collaborative learning environment.

Second, the teacher could organize reflection and sharing sessions midway through the course. These sessions would allow students to discuss challenges and exchange strategies for using COCA effectively. The teacher could offer tips on managing extensive collocation lists, such as prioritizing familiar collocations, noting down only the top five CMS, or keeping a journal of useful collocations after each essay.

Third, the teacher could help students set clear, manageable goals for the number of CMS they aim to include in their essays. This can prevent students from feeling overwhelmed when editing their essays. To determine an appropriate target, the teacher could analyze a sample essay. For example, in the sample essay used in Activity 3 of Step 2, I counted 16 CMS. I shared this finding with the students, explaining that a good IELTS essay typically includes this number of CMS. I suggested that if students did not feel confident using CMS, they could start with 5 CMS in the first essay and gradually increase the target in subsequent essays.

Fourth, for teachers unfamiliar with using COCA to extract CMS, the preparation time may seem extensive. However, it took me a maximum of 1.5 hours to prepare all the activities. For Step 1, no material preparation is required, as the teacher simply explains COCA's functions. In Step 2, Activity 1 (selecting three words to search in COCA) and Activity 2 (comparing a synonym pair) required minimal preparation, and for Activity 3 I had students analyze only the first half of a sample essay (about 130 words). In Step 3, I wrote a short model paragraph myself, which was just 94 words. My recommendation is to keep everything simple. This would prevent overwhelming students when using COCA to analyze CMS while significantly reducing preparation time. Another time-saving strategy is to reuse these activities and materials for future training courses, or to record a short instructional video explaining COCA's functions and share it with students in advance.

Fifth, to further reinforce students' understanding and use of CMS, the teacher could incorporate additional after-class exercises, such as fill-in-the-blank or multiple-choice tasks. However, if class time is an issue, an alternative approach is to integrate COCA into existing textbook activities. For instance, in the IELTS textbook used in my writing class, there is a "Vocabulary and Grammar Review" section at the end of every two units. The teacher could ask students to use COCA to check whether the collocations suggested in the textbook have an MI score over 3. This approach would not only strengthen students' COCA skills but also help them compare textbook collocations with those used in real-life contexts.

One final suggestion is that the activities described above are not limited to IELTS writing alone. The concept of analyzing CMS can be adapted to other writing contexts, such as academic, business, or even creative writing.
For example, the teacher could have students analyze CMS in the British Academic Written English (BAWE) corpus, which comprises nearly 3,000 university student texts across various disciplines and levels of study (Nesi & Gardner, 2018). By comparing CMS across different contexts and levels, students can gain a deeper understanding of how collocational choices impact the overall quality of writing and then use COCA to improve their collocational use accordingly.

Acknowledgements

I would like to thank the students who participated in my writing course and shared their insightful perspectives on their learning experiences. I also wish to express my gratitude to the editor, Matt Kessler, for his valuable feedback and support throughout the process, and to the reviewers for their constructive comments and suggestions, which greatly improved the quality of the manuscript. All remaining errors are my own.

Notes

1. MI scores are a measure of the association strength between two collocates. A collocation with an MI score of 3.0 or higher is considered a true collocation (Hunston, 2002). However, MI scores present several limitations, notably their strong preference for rare word combinations (Gablasova et al., 2017). Alternative measures, such as MI-squared and Log Dice, are often considered more appropriate for assessing collocational strength (Gablasova et al., 2017). Despite these alternatives, the current practice focuses on MI scores because COCA provides only MI values, and the commonly used threshold of 3.0 offers a simple and practical way to explain collocational strength to students.

2. The sample paragraph in Step 3 was prepared by the author.

3. The chosen words are not specifically related to the IELTS exam. They were selected because they are easy for students to work with. According to the Oxford Learner's Dictionaries, important is at A1, task at A2, and observation at B2. Since the primary goal of this activity is to help students improve their ability to use COCA to search for CMS and extract sentence examples, I aimed to avoid overwhelming them with words that might be too challenging.

References

Brook-Hart, G., & Jakeman, V. (2012). Complete IELTS bands 5-6.5: Student's book with answers. Cambridge University Press.

Chen, W. (2019). Profiling collocations in EFL writing of Chinese tertiary learners. RELC Journal, 50(1), 53–70.

Davies, M. (2008). The Corpus of Contemporary American English (COCA). https://www.english-corpora.org/coca/

Durrant, P., & Schmitt, N. (2009). To what extent do native and non-native writers make use of collocations? International Review of Applied Linguistics, 47(2), 157–177.

Fang, L., Ma, Q., & Yan, J. (2021). The effectiveness of corpus-based training on collocation use in L2 writing for Chinese senior secondary school students. Journal of China Computer-Assisted Language Learning, 1(1), 80–109.

Gablasova, D., Brezina, V., & McEnery, T. (2017). Collocations in corpus-based language learning research: Identifying, comparing, and interpreting the evidence. Language Learning, 67(S1), 155–179.

Garner, J., Crossley, S., & Kyle, K. (2019). N-gram measures and L2 writing proficiency. System, 80, 176–187.

Granger, S., & Bestgen, Y. (2014). The use of collocations by intermediate vs. advanced non-native writers: A bigram-based study. International Review of Applied Linguistics in Language Teaching, 52(3), 229–252.

Harrison, M. (2012).
Complete IELTS bands 5-6.5: Workbook with answers. Cambridge University Press.

Hunston, S. (2002). Corpora in applied linguistics. Cambridge University Press.

Li, L. X. (2023). Promoting accuracy of collocation use in L2 writing: The role of data-driven learning in indirect corrective feedback. Computer Assisted Language Learning. Advance online publication.

Nesi, H., & Gardner, S. (2018). The BAWE corpus and genre families classification of assessed student writing. Assessing Writing, 38, 51–55.

Pham, Q. H. P. (2023). Using COCA to promote students' awareness and use of collocations with high mutual information scores. TESOL Journal, 14, Article e706.

Siyanova-Chanturia, A. (2015). Collocation in beginner learner writing: A longitudinal study. System, 53, 148–160.

Tsai, K. J. (2019). Corpora and dictionaries as learning aids: Inductive versus deductive approaches to constructing vocabulary knowledge. Computer Assisted Language Learning, 32(8), 805–826.

About the Author

Quy Huynh Phu Pham is a lecturer in the Faculty of Foreign Languages at Ton Duc Thang University in Ho Chi Minh City, Vietnam, and a Ph.D. candidate in Applied Linguistics at the University of Queensland, Australia. His research focuses on corpus linguistics, quantitative methods, and second language acquisition.

E-mail: phamhuynhphuquy@tdtu.edu.vn
ORCiD: -0001-6474-9887
189317
https://bmccancer.biomedcentral.com/articles/10.1186/s12885-024-13316-0
BMC Cancer

Comparing supraclavicular surgery with radiotherapy versus radiotherapy alone in breast cancer patients with ipsilateral supraclavicular lymph node metastasis: a two-center retrospective cohort study

Yao Chen, JinLan He, TianYi Song, YuNa Zhang, Jie Chen, XiaoDong Wang & Yan Li

Research | Open access | BMC Cancer volume 24, Article number: 1572 (2024)

Abstract

Background

This study aimed to assess combined supraclavicular lymph node dissection (SLND) and radiotherapy (RT) versus standalone radiotherapy for efficacy in newly diagnosed breast cancer patients with ipsilateral supraclavicular lymph node metastasis (ISLNM).

Methods

In total, 143 patients with ISLNM treated between 2014 and 2021 in two medical institutions were examined retrospectively. Patients were divided into two groups: combined SLND and radiotherapy (surgery + RT, n = 73) or radiotherapy alone (RT, n = 70). The effects of SLND on disease-free survival (DFS), breast cancer-specific survival (BCSS), and overall survival (OS) were assessed by Kaplan-Meier analysis and Cox regression models.

Results

During a median follow-up of about 35 months, 18.2% of patients died. Five-year OS, BCSS, and DFS rates in the RT and surgery + RT groups were 79.2% and 69.4% (P = 0.21), 82.2% and 79.4% (P = 0.29), and 56.1% and 53.1% (P = 0.70), respectively. In multivariable analysis, SLND did not significantly affect these outcomes, a finding consistent across multiple subgroups. However, estrogen receptor expression, the presence of vascular cancer emboli, and surgical approach differentially affected DFS, BCSS, and OS. Furthermore, patients with residual supraclavicular lymph node tumors after surgery had lower DFS (43.7% vs. 73.2%) and OS (68.7% vs. 90.2%) rates than those without residual lymph node tumors. Residual supraclavicular lymph node tumor was an independent risk factor for DFS (HR = 4.191, 95% CI 1.755–10.007; p = 0.001) and OS (HR = 3.781, 95% CI 1.025–13.486; p = 0.046) in breast cancer patients with ISLNM.

Conclusions

Breast cancer patients with synchronous ISLNM may not benefit from SLND. Clinical decision-making for patients with ISLNM should be considered carefully. Prospective studies are needed to validate these results.

Introduction

The International Agency for Research on Cancer of the World Health Organization reported in 2021 that breast cancer, with 2.26 million new global cases in 2020, has surpassed lung cancer as the leading malignancy in women, contributing to a 25% mortality rate. This underscores the significant threat breast cancer poses to women's health. Despite important advances in treatment, prognosis in patients with supraclavicular lymph node metastasis (SLNM) remains poor. Breast cancer with SLNM without distant metastases at the initial diagnosis comprises 1–4% of all breast cancer cases, with five-year survival rates ranging from 30–47% [5, 6]. In 2002, the American Joint Committee on Cancer (AJCC) reclassified SLNM as a regional rather than a distant metastasis, implying a potentially curable condition with a more favorable prognosis compared with breast cancer with distant metastasis [7,8,9].
Consequently, these patients are recommended to undergo comprehensive treatment, including neoadjuvant chemotherapy, radical mastectomy, and postoperative radiotherapy, to achieve a cure. In patients with complete remission after neoadjuvant systemic therapy, the efficacy of radiotherapy is crucial for controlling tumors in the supraclavicular region. Clinically, patients with metastases in supraclavicular lymph nodes are typically treated with a radiation dose of 45–50 Gy, with an additional 10–20 Gy administered based on effectiveness. Higher radiation doses are often required for patients with distant metastases, which are associated with poor prognosis in breast cancer cases with supraclavicular lymph node metastases, as well as for those not achieving complete remission after neoadjuvant chemotherapy, which is an independent factor affecting survival. However, there is no direct correlation between high radiation doses and therapeutic effectiveness.

The controversy surrounding the role of surgical removal of supraclavicular lymph nodes in addition to radiotherapy persists, despite the consensus regarding the effectiveness of radiotherapy. Sun et al. highlighted distant metastasis as the primary cause of treatment failure in stage IIIc breast cancer patients and consequently considered the contribution of supraclavicular lymph node dissection to patient survival to be insignificant relative to other factors. In contrast, Ai et al.'s 2020 study concluded that surgery effectively confines tumors to the supraclavicular region, preventing further spread. Similarly, Tezuka et al. (2011) found that supraclavicular lymph node dissection (SLND) could improve patient survival rates, advocating its aggressive application in patients without distant metastases. However, identifying specific patient groups who would benefit from SLND requires further investigation. Thus, the survival benefits of SLND in breast cancer patients with initial ipsilateral supraclavicular lymph node metastasis (ISLNM) remain uncertain. We therefore performed a retrospective cohort study to examine the impact of SLND on survival in patients with ISLNM of breast cancer. This study analyzed prognostic factors in these patients and explored the therapeutic significance of supraclavicular lymph node dissection.

Materials and methods

Study population

This was a retrospective cohort study of breast cancer patients diagnosed with ipsilateral supraclavicular lymph node metastasis by supraclavicular lymph node puncture pathology biopsy at the first visit to the Breast Disease Center of West China Hospital of Sichuan University or the Sichuan Provincial Cancer Hospital between January 2014 and December 2021. Inclusion criteria were: (a) stage T1-4N3cM0 breast cancer diagnosed as ipsilateral supraclavicular lymph node metastasis by lymph node coarse/fine needle aspiration biopsy at the initial diagnosis; (b) radiotherapy of the ipsilateral supraclavicular lymph node after surgery; (c) initial diagnosis from 2014 to 2021; (d) age ≥ 18 years and < 75 years; (e) female sex. Exclusion criteria were: (a) primary diagnosis of distant metastasis; (b) previous systemic therapy for other tumors.
Clinical and pathological data

Clinical data were collected from electronic medical records, including the patient's name, age at diagnosis, menstrual status, type of breast surgery, tumor location and size, presence of vascular cancer embolus, number of involved axillary lymph nodes, postoperative paraffin analysis, preoperative neoadjuvant chemotherapy, supraclavicular lymph node status, presence of residual tumors in supraclavicular lymph nodes, administration and evaluation of neoadjuvant and postoperative chemotherapy, use of adjuvant chemotherapy, and site of radiotherapy. Preoperative ultrasound was carried out in all patients to assess the supraclavicular lymph nodes, performed after neoadjuvant chemotherapy or based on the most recent preoperative ultrasound report. The patients were categorized as negative (supraclavicular lymph nodes with no obvious structural abnormality) or positive (enlarged supraclavicular lymph nodes with structural abnormality detected by ultrasound) cases. Estrogen receptor (ER), progesterone receptor (PR), and human epidermal growth factor receptor 2 (HER2) expression data were obtained from postoperative immunohistochemistry results. Tumor size was assessed employing the AJCC 8th edition TNM staging system for breast cancer. Tumors with a maximum diameter ≤ 20 mm on initial ultrasound scans were classified as T1, those with a maximum diameter > 20 mm and ≤ 50 mm as T2, and those with a maximum diameter > 50 mm as T3. T4 was defined as tumors of any size with direct invasion of the chest wall and/or skin.
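The size cutoffs above amount to a simple classification rule; the following sketch (our illustration, not code from the study) makes it explicit:

```python
def t_stage(max_diameter_mm: float, invades_chest_wall_or_skin: bool) -> str:
    """Assign the clinical T category from the AJCC 8th edition cutoffs
    described above (maximum tumor diameter on initial ultrasound)."""
    if invades_chest_wall_or_skin:
        return "T4"  # any size with chest wall and/or skin invasion
    if max_diameter_mm <= 20:
        return "T1"
    if max_diameter_mm <= 50:
        return "T2"
    return "T3"

assert t_stage(18, False) == "T1"
assert t_stage(35, False) == "T2"
assert t_stage(60, False) == "T3"
assert t_stage(12, True) == "T4"
```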
Treatments

The included patients underwent surgery for the breast primary and axillary lymph node dissection, followed by postoperative radiotherapy. Systemic therapy was administered to each patient, with varying combinations of neoadjuvant and/or adjuvant therapy (depending on subtype, chemotherapy consisted of 4–8 cycles of agents such as paclitaxel, anthracyclines, and platinum). Hormone receptor-positive tumor cases received endocrine therapy and chemotherapy, while HER2-positive cases in both groups received the same trastuzumab +/- pertuzumab targeted therapy. The choice of surgical approach for primary breast cancer management was made individually based on factors such as tumor size, degree of invasion, tumor location, and patient preference. All patients underwent axillary lymph node dissection. Some also had more extensive cervical lymph node dissection, including supraclavicular lymph nodes. Patients with broader lymph node dissection beyond supraclavicular nodes were not separately analyzed. Instead, they were categorized as having enlarged radical surgery and were included in the supraclavicular lymph node dissection group. All patients received postoperative radiotherapy, administered systematically and standardized by a radiologist, tailored to their specific conditions. The dose and technique of radiotherapy were as follows: for patients who underwent supraclavicular lymph node dissection, we utilized intensity-modulated radiation therapy (IMRT) with a total dose of 50 Gy. For those who did not undergo lymph node dissection and received radiotherapy alone, we administered a baseline dose of 50 Gy to the chest wall and supraclavicular subclinical lesion areas, with an additional boost of 10–16 Gy targeted at positive lymph nodes and tumor regions.

Follow-up and outcomes

The included patients were followed up until December 31, 2022. During regular follow-up within the first 3 years, each patient underwent thoracic and abdominal computed tomography (CT) scans as well as breast color Doppler ultrasonography every three months. Where necessary, magnetic resonance imaging (MRI) of the head, bone scintigraphy, and relevant blood biomarker assays were performed. From the 3rd to the 5th year, reexamination was carried out every six months; after 5 years, it was conducted annually. Reexamination was brought forward if any abnormal symptoms or physical signs appeared. The primary endpoint was disease-free survival (DFS), and the secondary endpoints were overall survival (OS) and breast cancer-specific survival (BCSS). DFS was defined as the time from surgery to recurrence, metastasis, or death from any cause. OS was the time from surgery to death from any cause. BCSS was the time from surgery to death from progression of breast cancer.

Statistical analyses

Statistical analysis used R (version 4.1.3) and SPSS 26 (SPSS Inc., Chicago, IL, USA). Categorical variables were expressed as count (percentage) and compared by the chi-square test. Survival analysis was performed by the Kaplan-Meier method, and survival curves were plotted for both patient groups and compared by the log-rank test. Univariate analysis of the effects of study variables on OS, BCSS, and DFS used Cox regression. Multifactorial Cox regression analysis was then performed. Statistical significance was considered at P < 0.05.
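To make the analysis pipeline concrete, the sketch below reproduces its two core steps (Kaplan-Meier comparison with a log-rank test, then a multivariable Cox model) in Python with the lifelines library. The authors used R and SPSS, so this is a minimal illustration under our own assumptions: the file, data frame layout, and column names are hypothetical.

```python
import pandas as pd
from lifelines import KaplanMeierFitter, CoxPHFitter
from lifelines.statistics import logrank_test

# Hypothetical one-row-per-patient table: os_months, death (1 = died),
# group ("surgery+RT" or "RT"), age, er_negative, slnd (1 = dissection).
df = pd.read_csv("islnm_cohort.csv")
surg = df[df["group"] == "surgery+RT"]
rt = df[df["group"] == "RT"]

# Kaplan-Meier curves for overall survival, compared by the log-rank test.
km_surg = KaplanMeierFitter().fit(surg["os_months"], surg["death"], label="surgery+RT")
km_rt = KaplanMeierFitter().fit(rt["os_months"], rt["death"], label="RT")
ax = km_surg.plot_survival_function()
km_rt.plot_survival_function(ax=ax)
print(logrank_test(surg["os_months"], rt["os_months"],
                   surg["death"], rt["death"]).p_value)

# Multivariable Cox proportional-hazards model (hazard ratios and 95% CIs).
cph = CoxPHFitter()
cph.fit(df[["os_months", "death", "age", "er_negative", "slnd"]],
        duration_col="os_months", event_col="death")
cph.print_summary()
```

The same construction applies to DFS and BCSS by swapping in the corresponding duration and event columns.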
Results

In total, 143 breast cancer patients with ISLNM were examined in this study, including 117 and 26 treated in the West China Hospital of Sichuan University and Sichuan Provincial Cancer Hospital, respectively. The baseline characteristics of all eligible patients are shown in Table 1. Mean and median patient age were both 50 (range 26–72) years, and 77 (53.8%) patients were postmenopausal. The most common type of breast cancer was HER2 overexpression (48.3%), followed by Luminal A (21%). Neoadjuvant chemotherapy was administered to 125 (87.4%) patients, with all 73 cases in the surgery + RT group receiving this treatment preoperatively. Post-chemotherapy, 113 patients (79%) showed a partial response, 10 (7%) had stable disease (SD), and 2 (1.4%) had progressive disease (PD). Regarding surgical procedures, 109 patients (76.2%) underwent a modified radical mastectomy, 19 (13.3%) had radical surgery, 9 (6.3%) opted for breast-conserving surgery, and 6 (4%) had conventional extended radical surgery. The participants were divided into two groups based on treatment choice: the supraclavicular lymph node dissection plus RT (surgery + RT, n = 73) and RT (n = 70) groups. Individuals in the surgery + RT group had a higher tendency to choose extensive radical mastectomy and radical surgery, and, unlike the RT group, none opted for breast-conserving surgery. Meanwhile, there were no significant differences in age, postmenopausal status, clinical T stage, ER positivity, PR positivity, HER2 positivity, vascular cancer embolus, recurrence or metastasis, and deaths between the two groups. In addition, among patients with the HER2-positive type, 32 (94.1%) in the surgery + RT group and 23 (88.5%) in the RT group showed a partial response after receiving neoadjuvant chemotherapy, with no statistically significant difference between the two groups.

During a median follow-up of 35 months, tumor recurrence or metastasis was observed in 59 (41.3%) patients, and 26 (18.2%) deaths were recorded. Of the 70 patients in the RT group, tumor recurrence/metastasis occurred in 30 (42.9%), leading to 12 (17.1%) deaths. Meanwhile, of the 73 patients with SLND, 29 (39.7%) experienced tumor recurrence/metastasis, resulting in 14 (19.2%) deaths. As shown in Fig. 1, Kaplan-Meier survival analyses revealed similar survival rates in the RT and surgery + RT groups. Five-year OS, BCSS, and DFS rates (predicted) in the RT and surgery + RT groups were 79.2% and 69.4% (P = 0.21), 82.2% and 79.4% (P = 0.29), and 56.1% and 53.1% (P = 0.70), respectively.

A Cox regression model was utilized to investigate the correlations between baseline clinicopathological variables and disease prognosis. In the univariate analysis shown in Table 2, ER negativity was significantly correlated with OS (HR = 4.401, 95% CI 1.924–10.065; p < 0.001), BCSS (HR = 4.753, 95% CI 1.873–12.066; p = 0.001) and DFS (HR = 1.866, 95% CI 1.111–3.133; p = 0.018). PR positivity was correlated with OS (HR = 2.446, 95% CI 1.103–5.423; p = 0.028), and the presence of vascular cancer embolus was correlated with DFS (HR = 2.057, 95% CI 1.18–3.587; p = 0.011). However, SLND had no effect on OS (HR = 1.668, 95% CI 0.749–3.715; p = 0.211), BCSS (HR = 1.617, 95% CI 0.66–3.96; p = 0.293) or DFS (HR = 1.107, 95% CI 0.661–1.853; p = 0.699). Multivariate analysis was conducted to further elucidate these relationships, and the results demonstrated that age and ER negativity were independent risk factors for OS, BCSS and DFS, while the presence of vascular cancer embolus, modified radical mastectomy and breast-conserving surgery were independently associated with DFS. However, SLND remained unrelated to patient outcomes in breast cancer with ISLNM.

We next investigated the associations of SLND with outcomes in subgroups of patients with different features. The results demonstrated that SLND was still not associated with DFS in patients aged ≤ 50 or > 50 years (Fig. 2A, B), with T1-2 or T3-4 disease (Fig. 2C, D), with ER positivity or negativity (Fig. 2E, F), and with or without vascular cancer embolus (Fig. 2G, H).

To further examine risk factors for breast cancer prognosis, Kaplan-Meier survival analyses were performed. As shown in Fig. 3, Kaplan-Meier survival analysis indicated that patients with modified radical mastectomy and breast-conserving surgery had higher OS, BCSS and DFS rates, which generally corroborated the multivariable Cox regression analysis above (Table 3). Besides, five-year OS, BCSS, and DFS rates in ER-positive and ER-negative cases (predicted) were 85% and 53.8% (P = 0.00014), 87.4% and 67.5% (P = 0.00034), and 60.2% and 45% (P = 0.017), respectively, suggesting that ER-negative patients had a worse outcome.

Of the 73 patients administered SLND, 32 (43.8%) had positive postoperative pathological results, while 41 (56.2%) had negative results. The baseline clinical conditions are detailed in Supplementary Table 1. Individuals with residual supraclavicular lymph node tumors had higher percentages of ER positivity, PR positivity, HER2 negativity and ≥ 10 axillary lymph nodes involved than those without residual supraclavicular lymph node tumors (all p < 0.05).
In multivariate analysis (Table 4), age was independently associated with OS (HR = 0.929, 95% CI 0.865–0.998; p = 0.043) and BCSS (HR = 0.902, 95% CI 0.828–0.983; p = 0.018); ER negativity was independently associated with OS (HR = 10.93, 95% CI 2.668–44.781; p = 0.001), BCSS (HR = 14.275, 95% CI 2.384–85.463; p = 0.004) and DFS (HR = 4.057, 95% CI 1.746–9.423; p = 0.001); residual supraclavicular lymph node tumors were associated with OS (HR = 3.781, 95% CI 1.025–12.486; p = 0.046) and DFS (HR = 4.191, 95% CI 1.755–10.007; p = 0.001); and modified radical mastectomy was associated with OS (HR = 0.138, 95% CI 0.021–0.905; p = 0.039) and BCSS (HR = 0.049, 95% CI 0.005–0.518; p = 0.012).

Discussion

Supraclavicular lymph node dissection coupled with radiotherapy did not enhance patient prognosis in breast cancer with ISLNM. Factors such as ER expression, presence of vascular cancer emboli, and surgical approach exhibited varying effects on DFS, BCSS, and OS. Cases with ER negativity and vascular cancer emboli showed a relatively poorer prognosis, necessitating vigilance for disease recurrence/metastasis during follow-up, particularly in cases with ipsilateral cervical lymph nodes. The preferred surgical approach may be modified radical surgery, as it can ensure complete tumor resection and no subclavian lymph node enlargement. The achievement of pathological complete response in supraclavicular lymph nodes after neoadjuvant chemotherapy signified a more favorable prognosis.

Evidence indicates that the prognosis of advanced breast cancer patients with ISLNM is generally poor. In 1999, Fodor et al. reported a 3-year overall survival rate of only 5%, while in 1992 Halverson et al. found a 5-year survival rate of 13%, significantly lower than the 35% and 37% reported for patients with chest wall or axillary recurrences, respectively. These metastases are often considered distant and incurable, necessitating aggressive systemic therapy. However, studies have suggested a more hopeful outlook. Olivotto et al. reported a 5-year survival rate of 33.3%, with 13% of patients surviving for 20 years post-diagnosis. In addition, Brito et al. found that patients with supraclavicular lymph node metastasis and local recurrence had a significantly higher 5-year overall survival rate compared with those with distant metastases. This led to a reclassification of breast cancer with ipsilateral supraclavicular lymph node metastasis as a curable disease, adjusting it to stage IIIc in the 6th edition of the AJCC staging in 2002. Consequently, in patients with ipsilateral supraclavicular lymph node metastasis and no other evidence of distant metastasis, the treatment goal should be tumor eradication. This could be achieved by a combination of neoadjuvant chemotherapy, surgery, radiotherapy, and endocrine therapy.

Our comparison of different local treatments revealed no significant differences in OS, BCSS, and DFS between the uncleared and cleared supraclavicular lymph node groups (P > 0.05). This corroborates findings by Chang, Liu, Sun, and Kim et al. [11, 14, 18, 19]. Chang et al.'s retrospective analysis of 29 patients with ISLNM and no distant metastases found no significant difference in OS between individuals administered supraclavicular lymph node dissection and those who received radical radiotherapy. Similarly, Liu et al. found no significant differences in DFS and OS between patients with and without supraclavicular lymph node dissection.
Meanwhile, some studies have suggested that supraclavicular lymph node dissection improves prognosis [12, 20]. Ai et al. contended that surgery could restrict the tumor to the supraclavicular region, thereby preventing its further spread. Our findings aligned with most current clinical retrospective studies, suggesting that systemic therapy should be considered the primary treatment for advanced disease, with local lesion management playing a secondary role. Specifically, the current findings indicate no additional benefit from supraclavicular lymph node dissection in the presence of radiotherapy. Therefore, caution is advised when considering this procedure in patients with supraclavicular lymph node metastases. However, given the retrospective nature of this study, the potential for bias, and the small sample size (143 cases), further prospective studies with larger samples are warranted.

In this study, 62.2% of patients showed positive expression of ER, a key molecular target in breast cancer pathogenesis, while 37.8% were ER negative. ER status independently affected DFS, OS, and BCSS, with ER-positive cases demonstrating superior outcomes. This corroborates Sun et al.'s study, which reported significant differences in 5-year DFS (43.6% for ER-positive vs. 18.5% for ER-negative, P = 0.003) and OS (89.4% for ER-positive vs. 49.5% for ER-negative, P < 0.01) rates and likewise associated ER negativity with poor prognosis. ER status is a crucial determinant across all clinical stages of breast cancer, as underscored in this study for N3c patients. While PR and HER2 have been identified as significant prognostic factors in other studies [11, 18, 20], this work found no impact of PR status. This differs from the common understanding of ER's role in regulating PR, potentially due to the lack of one-to-one correspondence between ER and PR statuses in the present patient cohort.

Four surgical approaches are commonly utilized in primary breast cancer management, i.e., extensive radical mastectomy, radical mastectomy, modified radical mastectomy, and breast-conserving surgery. The choice of procedure is affected by factors such as tumor size, degree of infiltration, tumor location, and patient preference. A study by Chen et al. found no significant differences in OS and disease-specific survival (DSS) between patients administered localized (breast-conserving surgery, subcutaneous mastectomy with nipple-areola and skin preservation, and simple mastectomy with axillary lymph node preservation) and radical (modified radical, radical, and expanded radical) surgeries (P = 0.1994 and P = 0.1738, respectively). However, this study suggested that patients administered modified radical or breast-conserving surgery have a better prognosis than those administered extended radical surgery. This discrepancy may be due to the greater trauma caused by larger surgeries, which may negatively impact the immune system and patient survival. Additionally, use of more traumatic surgical procedures may indicate that the patients had relatively more advanced disease, potentially biasing the results.

Vascular cancer embolism significantly affects patient prognosis, as evidenced by Hasebe et al., who identified this index as a key predictor of local recurrence in invasive ductal carcinoma. This study corroborates the latter finding, demonstrating that the presence of vascular cancer emboli significantly impacts DFS.
Patients with vascular cancer emboli had a higher risk of disease recurrence/metastasis compared with those without. The prognostic value of residual supraclavicular lymph node tumors following dissection in patients with ISLNM remains controversial. Previous evidence indicates that pathologic complete response (pCR) in breast and axillary sites could predict long-term clinical benefits post-surgery, particularly in individuals with triple-negative and HER2-positive disease [23, 24]. However, the prognostic value of supraclavicular lymph node pCR post-neoadjuvant chemotherapy remains unexplored. Zhu et al. proposed that surgical removal of supraclavicular lymph nodes to evaluate the pCR rate is crucial for patient prognosis. Consistently, the present study found significantly higher OS and DFS rates in patients without postoperative supraclavicular lymph node tumor residue compared with those with such residue. Pathological confirmation of residual supraclavicular lymph node tumors was associated with lower DFS and OS. However, this study found that supraclavicular lymph node dissection did not improve DFS, BCSS, and OS, while the absence of tumor residue in postoperative pathological analysis of supraclavicular lymph nodes was associated with higher OS and DFS. Therefore, supraclavicular lymph node dissection may be required to ascertain the presence of tumor residue, which may have significant prognostic implications.

This study had several limitations. Firstly, this was a retrospective study, which may introduce bias. Although efforts were made to reduce bias via statistical analyses, it was not possible to fully eliminate its impact on the obtained results. Secondly, the study covered a long time span, making it difficult to collect specific information on medication regimens. Thirdly, the included patients were treated in different medical wards across two hospitals, West China Hospital and Sichuan Cancer Hospital. Due to the lack of authoritative guidelines on clearing supraclavicular lymph nodes, each doctor had their own individual standards, leading to inconsistencies in treatment approaches. Fourthly, the missing data on response to chemotherapy for about 25.7% of patients in the RT group might introduce bias into the overall results, which needs further verification. Finally, no external validation was performed in this study, which limits the generalizability of the current findings.

In this study, supraclavicular lymph node dissection added to radiotherapy did not improve the prognosis of breast cancer patients with ISLNM. During follow-up, it is important to watch for disease recurrence/metastasis, especially in ipsilateral cervical lymph nodes. In addition, the absence of tumor residue in supraclavicular lymph nodes after neoadjuvant chemotherapy is an indicator of a better patient prognosis. Whether tumor residue in supraclavicular lymph nodes can be detected by any method other than clearance surgery is worth investigating.

Data availability

All data generated or analysed during this study are included in this published article and its supplementary information files.

References

1. Sung H, Ferlay J, Siegel RL, et al. Global cancer statistics 2020: GLOBOCAN estimates of incidence and mortality worldwide for 36 cancers in 185 countries. CA Cancer J Clin. 2021;71:209–49.
2. Akram M, Iqbal M, Daniyal M, et al. Awareness and current knowledge of breast cancer. Biol Res. 2017;50:33.
3. Dellapasqua S, Bagnardi V, Balduzzi A, et al. Outcomes of patients with breast cancer who present with ipsilateral supraclavicular or internal mammary lymph node metastases. Clin Breast Cancer. 2014;14:53–60.
4. Chen SC, Chen MF, Hwang TL, et al. Prediction of supraclavicular lymph node metastasis in breast carcinoma. Int J Radiat Oncol Biol Phys. 2002;52:614–9.
5. Pergolizzi S, Adamo V, Russi E, et al. Prospective multicenter study of combined treatment with chemotherapy and radiotherapy in breast cancer women with the rare clinical scenario of ipsilateral supraclavicular node recurrence without distant metastases. Int J Radiat Oncol Biol Phys. 2006;65:25–32.
6. Olivotto IA, Chua B, Allan SJ, et al. Long-term survival of patients with supraclavicular metastases at diagnosis of breast cancer. J Clin Oncol. 2003;21:851–4.
7. Brito RA, Valero V, Buzdar AU, et al. Long-term results of combined-modality therapy for locally advanced breast cancer with ipsilateral supraclavicular metastases: the University of Texas M.D. Anderson Cancer Center experience. J Clin Oncol. 2001;19:628–33.
8. Cheng JC, Chen CM, Liu MC, et al. Locoregional failure of postmastectomy patients with 1–3 positive axillary lymph nodes without adjuvant radiotherapy. Int J Radiat Oncol Biol Phys. 2002;52:980–8.
9. Gradishar WJ, Moran MS, Abraham J, et al. Breast cancer, version 3.2022, NCCN Clinical Practice Guidelines in Oncology. J Natl Compr Canc Netw. 2022;20:691–722.
10. Singletary SE, Allred C, Ashley P, et al. Revision of the American Joint Committee on Cancer staging system for breast cancer. J Clin Oncol. 2002;20:3628–36.
11. Sun XF, Wang YJ, Huang T, et al. Comparison between surgery plus radiotherapy and radiotherapy alone in treating breast cancer patients with ipsilateral supraclavicular lymph node metastasis. Gland Surg. 2020;9:1513–20.
12. Ai X, Wang M, Li J, et al. Supraclavicular lymph node dissection with radiotherapy versus radiotherapy alone for operable breast cancer with synchronous ipsilateral supraclavicular lymph node metastases: a real-world cohort study. Gland Surg. 2020;9:329–41.
13. Tezuka K, Dan N, Tendo M, et al. [A case of breast cancer with postoperative metastasis to the supraclavicular lymph nodes-recurrence-free survival achieved by surgical excision following chemotherapy]. Gan Kagaku Ryoho Cancer Chemother. 2011;38:1345–7.
14. Chang XZ, Yin J, Sun J, et al. A retrospective study of different local treatments in breast cancer patients with synchronous ipsilateral supraclavicular lymph node metastasis. J Cancer Res Ther. 2013;9 Suppl 3:S158–61.
15. Fodor J, Toth J, Major T, et al. Incidence and time of occurrence of regional recurrence in stage I-II breast cancer: value of adjuvant irradiation. Int J Radiat Oncol Biol Phys. 1999;44:281–7.
16. Halverson KJ, Perez CA, Kuske RR, et al. Survival following locoregional recurrence of breast cancer: univariate and multivariate analysis. Int J Radiat Oncol Biol Phys. 1992;23:285–91.
17. Liu XH, Zhang L, Chen B.
A meta-analysis of the prognosis in patients with breast cancer with ipsilateral supraclavicular lymph node metastasis versus patients with stage IIIb/c or IV breast cancer. Chronic Dis Transl Med. 2015;1:236–42.
18. Liu BW, Chen LX, Ma K, et al. The role of surgery on locoregional treatment of patients with breast cancer newly diagnosed with ipsilateral supraclavicular lymph node metastasis. J Cancer Res Ther. 2022;18:496–502.
19. Kim K, Kim SS, Shin KH, et al. Aggressive surgical excision of supraclavicular lymph node did not improve the outcomes of breast cancer with supraclavicular lymph node involvement (KROG 16-14). Clin Breast Cancer. 2020;20:51–60.
20. Noh JM, Kim KH, Park W, et al. Prognostic significance of nodal involvement region in clinical stage IIIc breast cancer patients who received primary systemic treatment, surgery, and radiotherapy. Breast (Edinburgh, Scotland). 2015;24:637–41.
21. Chen QT, Zeng LY, Ouyang DJ, et al. Surgery of the primary tumor offers survival benefits of breast cancer with synchronous ipsilateral supraclavicular lymph node metastasis. World J Surg. 2020;44:1163–72.
22. Hasebe T, Iwasaki M, Hojo T, et al. Histological factors for accurately predicting first locoregional recurrence of invasive ductal carcinoma of the breast. Cancer Sci. 2013;104:1252–61.
23. Berruti A, Amoroso V, Gallo F, et al. Pathologic complete response as a potential surrogate for the clinical outcome in patients with breast cancer after neoadjuvant therapy: a meta-regression of 29 randomized prospective studies. J Clin Oncol. 2014;32:3883–91.
24. Zhang GC, Zhang YF, Xu FP, et al. Axillary lymph node status, adjusted for pathologic complete response in breast and axilla after neoadjuvant chemotherapy, predicts differential disease-free survival in breast cancer. Curr Oncol. 2013;20:e180–192.

Acknowledgements

Not applicable.

Funding

This research was supported by the Chinese Society of Clinical Oncology (No. Y-HR2016-154).

Author information

Authors and Affiliations

1. Breast Center, West China Hospital, Sichuan University, No. 37, Guoxue Alley, Chengdu, 610041, China: Yao Chen, YuNa Zhang, Jie Chen & XiaoDong Wang
2. Department of Head and Neck Oncology, West China Hospital, Sichuan University, Chengdu, China: JinLan He
3. West China School of Medicine, West China Hospital, Sichuan University, Chengdu, China: TianYi Song
4. Department of Radiation Oncology, Cancer Center, West China Hospital, No. 37, Guoxue Alley, Chengdu, 610041, China: Yan Li
5. Lung Cancer Center, West China Hospital, Sichuan University, Chengdu, Sichuan, China: Yan Li

Contributions

Yao Chen wrote the main manuscript text. JinLan He, TianYi Song and YuNa Zhang edited the text.
All authors reviewed the manuscript. Yao Chen and JinLan He completed the conceptualization, methodology and validation. TianYi Song, YuNa Zhang, Jie Chen, XiaoDong Wang and Yan Li prepared the figures and tables.

Corresponding authors

Correspondence to XiaoDong Wang or Yan Li.

Ethics declarations

Ethical approval

The current study followed the Declaration of Helsinki. The study protocols were approved by the ethics committee of West China Hospital of Sichuan University (ethical approval code: 2022-(385)). Informed consent was obtained from all study participants.

Consent for publication

Not applicable.

Competing interests

The authors declare no competing interests.

Publisher's note

Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Electronic supplementary material

Supplementary Material 1

Rights and permissions

Open Access. This article is licensed under a Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International License, which permits any non-commercial use, sharing, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if you modified the licensed material. You do not have permission under this licence to share adapted material derived from this article or parts of it. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit https://creativecommons.org/licenses/by-nc-nd/4.0/.

Cite this article

Chen, Y., He, J., Song, T. et al. Comparing supraclavicular surgery with radiotherapy versus radiotherapy alone in breast cancer patients with ipsilateral supraclavicular lymph node metastasis: a two-center retrospective cohort study. BMC Cancer 24, 1572 (2024).

Keywords: Breast neoplasms; Outcomes; Lymphatic metastasis; Lymph node excision; Ipsilateral supraclavicular lymph node metastasis; Supraclavicular lymph node dissection
189318
https://www.collinsdictionary.com/us/dictionary/english/elite
ELITE definition in American English | Collins English Dictionary

Definition of 'elite'

elite (ɪliːt, eɪ-)
Word forms: plural elites

1. countable noun
You can refer to the most powerful, rich, or talented people within a particular group, place, or society as the elite.
...a government comprised mainly of the elite.
Synonyms: aristocracy, best, pick, elect

2. adjective [ADJ n]
Elite people or organizations are considered to be the best of their kind.
...the elite troops of the president's bodyguard.
Synonyms: leading, best, finest, pick

Collins COBUILD Advanced Learner's Dictionary. Copyright © HarperCollins Publishers

elite in American English (ɪˈlit; eɪˈlit)

noun [also with pl. v.]
1. the group or part of a group selected or regarded as the finest, best, most distinguished, most powerful, etc.
2. a size of type for typewriters, measuring twelve characters to the linear inch

adjective
3. of, forming, or suitable for an elite

Webster's New World College Dictionary, 4th Edition. Copyright © 2010 by Houghton Mifflin Harcourt. All rights reserved.
Word origin: Fr élite < OFr eslite, fem. pp. of eslire, to choose < VL exligere, for L eligere: see elect

elite in American English (ɪˈlit, eiˈlit)

noun
1. (often used with a pl. v.) the choice or best of anything considered collectively, as of a group or class of persons
2. (used with a pl. v.) persons of the highest class: Only the elite were there
3. a group of persons exercising the major share of authority or influence within a larger group: the power elite of a major political party
4. a type, approximately 10-point in printing-type size, widely used in typewriters and having 12 characters to the inch. Compare pica¹ (sense 3)

adjective
5. representing the most choice or select; best: an elite group of authors

Also: élite

Most material © 2005, 1997, 1991 by Penguin Random House LLC. Modified entries © 2019 by Penguin Random House LLC and HarperCollins Publishers Ltd

Word origin: [1350–1400; ME elit a person elected to office ‹ MF e(s)lit ptp. of e(s)lire to choose; see elect]

elite in British English or élite (ɪˈliːt, eɪ-)

noun
1. (sometimes functioning as plural) the most powerful, rich, gifted, or educated members of a group, community, etc
2. Also called: twelve pitch. a typewriter type size having 12 characters to the inch

adjective
3. of, relating to, or suitable for an elite; exclusive

Collins English Dictionary. Copyright © HarperCollins Publishers

Word origin: C18: from French, from Old French eslit chosen, from eslire to choose, from Latin ēligere to elect

Examples of 'elite' in a sentence

These examples have been automatically selected and may contain sensitive content that does not reflect the opinions or policies of Collins, or its parent company HarperCollins. We welcome feedback: report an example sentence to the Collins team.

Elite football without fans both looks and sounds weird. The Guardian (2015)
But church leaders are not called to be elite executive managers. The Guardian (2015)
But is the success for elite sports people reflected in the wider public? The Guardian (2016)
How common is asthma in elite athletes? The Guardian (2016)
The labrador pups that this documentary shadows for two years are an elite band. The Guardian (2019)
They can then choose an elite group from within that elite. Times, Sunday Times (2010)
He wants to find out more about elite performance. Times, Sunday Times (2014)
Europe's political elite is wrong too. Times, Sunday Times (2011)
The elite European club competition is in its final season in its existing format. Times, Sunday Times (2013)
It was flanked by dozens of heavily armed members of the country's military elite. Times, Sunday Times (2014)

Related word partners: academic elite, elite college, elite force, elite group, elite level, elite player, elite school, elite society, elite squad, elite team, elite unit, intellectual elite, landed elite, media elite, political elite, privileged elite, ruling elite, social elite, urban elite, wealthy elite

Trends of elite: usage over time (source: Google Books Ngram Viewer)

In other languages

British English: elite NOUN /ɪˈliːt/ You can refer to the most powerful, rich, or talented people within a particular group, place, or society as the elite. ...a government comprised mainly of the elite.
American English: elite /ɪˈlit, eɪ-/; Brazilian Portuguese: elite; Chinese: 精英; European Spanish: élite; French: élite; German: Elite; Italian: élite; Japanese: エリート; Korean: 엘리트; European Portuguese: elite; Spanish: élite; Thai: ชนชั้นนำ, ชนชั้นสูง, กลุ่มหัวกะทิ

British English: elite /ɪˈliːt/ ADJECTIVE. Elite people or organizations are considered to be the best of their kind. ...the elite troops of the President's bodyguard.

American English: elite /ɪˈlit, eɪ-/; Brazilian Portuguese: de elite; Chinese: 精英的; European Spanish: de élite; French: d'élite; German: Elite-; Italian: scelto; Japanese: エリートの; Korean: 선발된; European Portuguese: de elite; Spanish: de élite; Thai: ชั้นเยี่ยม
189319
https://openstax.org/books/introductory-business-statistics-2e/pages/2-3-measures-of-the-center-of-the-data
Introductory Business Statistics 2e

2.3 Measures of the Center of the Data

The "center" of a data set is also a way of describing location. The two most widely used measures of the "center" of the data are the mean (average) and the median. To calculate the mean weight of 50 people, add the 50 weights together and divide by 50. Technically this is the arithmetic mean. We will discuss the geometric mean later. To find the median weight of the 50 people, order the data and find the number that splits the data into two equal parts, meaning an equal number of observations on each side. The weights of 25 people are below this weight, and 25 people are heavier than this weight. The median is generally a better measure of the center when there are extreme values or outliers because it is not affected by the precise numerical values of the outliers. The mean is the most common measure of the center.

NOTE: The words "mean" and "average" are often used interchangeably. The substitution of one word for the other is common practice. The technical term is "arithmetic mean," and "average" is technically a center location. Formally, the arithmetic mean is called the first moment of the distribution by mathematicians. However, in practice among non-statisticians, "average" is commonly accepted for "arithmetic mean."

When each value in the data set is not unique, the mean can be calculated by multiplying each distinct value by its frequency and then dividing the sum by the total number of data values. The letter used to represent the sample mean is an x with a bar over it (pronounced "x bar"): x̄. The Greek letter μ (pronounced "mew") represents the population mean. One of the requirements for the sample mean to be a good estimate of the population mean is for the sample taken to be truly random.

To see that both ways of calculating the mean are the same, consider the sample: 1; 1; 1; 2; 2; 3; 4; 4; 4; 4; 4

x̄ = (1 + 1 + 1 + 2 + 2 + 3 + 4 + 4 + 4 + 4 + 4)/11 = 2.7

x̄ = (3(1) + 2(2) + 1(3) + 5(4))/11 = 2.7

In the second calculation, the frequencies are 3, 2, 1, and 5.

You can quickly find the location of the median by using the expression (n + 1)/2. The letter n is the total number of data values in the sample. If n is an odd number, the median is the middle value of the ordered data (ordered smallest to largest). If n is an even number, the median is equal to the two middle values added together and divided by two after the data has been ordered. For example, if the total number of data values is 97, then (n + 1)/2 = (97 + 1)/2 = 49. The median is the 49th value in the ordered data. If the total number of data values is 100, then (n + 1)/2 = (100 + 1)/2 = 50.5. The median occurs midway between the 50th and 51st values. The location of the median and the value of the median are not the same. The upper case letter M is often used to represent the median.
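Both ways of computing the mean, and the (n + 1)/2 location rule for the median, are easy to check in code. The following sketch is illustrative only (plain Python written for this rewrite, not part of the textbook):

```python
from collections import Counter

sample = [1, 1, 1, 2, 2, 3, 4, 4, 4, 4, 4]

# Mean, first way: add the values and divide by how many there are.
mean_raw = sum(sample) / len(sample)

# Mean, second way: weight each distinct value by its frequency.
freq = Counter(sample)                       # {1: 3, 2: 2, 3: 1, 4: 5}
mean_freq = sum(v * f for v, f in freq.items()) / sum(freq.values())

assert round(mean_raw, 1) == round(mean_freq, 1) == 2.7

def median(data):
    """Median via the (n + 1)/2 location rule on sorted data."""
    ordered = sorted(data)
    n = len(ordered)
    loc = (n + 1) / 2                        # 1-based location
    if loc.is_integer():                     # n odd: single middle value
        return ordered[int(loc) - 1]
    lower = ordered[int(loc) - 1]            # n even: average the two
    upper = ordered[int(loc)]                # middle values
    return (lower + upper) / 2

print(mean_raw, median(sample))              # 2.727..., 3
```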
The next example illustrates the location of the median and the value of the median.

Example 2.24

Problem: A hospital administrator keeps track of the ages (in years) of patients visiting the emergency room over a one-week period (data are sorted from smallest to largest):

3; 4; 8; 8; 10; 11; 12; 13; 14; 15; 15; 16; 16; 17; 17; 18; 21; 22; 22; 24; 24; 25; 26; 26; 27; 27; 29; 29; 31; 32; 33; 33; 34; 34; 35; 37; 40; 44; 44; 47

Calculate the mean and the median.

Solution: The calculation for the mean is:

x̄ = [3 + 4 + (8)(2) + 10 + 11 + 12 + 13 + 14 + (15)(2) + (16)(2) + ... + 35 + 37 + 40 + (44)(2) + 47]/40 = 23.6

To find the median, M, first use the formula for the location. The location is:

(n + 1)/2 = (40 + 1)/2 = 20.5

Starting at the smallest value, the median is located between the 20th and 21st values (the two 24s):

3; 4; 8; 8; 10; 11; 12; 13; 14; 15; 15; 16; 16; 17; 17; 18; 21; 22; 22; 24; 24; 25; 26; 26; 27; 27; 29; 29; 31; 32; 33; 33; 34; 34; 35; 37; 40; 44; 44; 47

M = (24 + 24)/2 = 24

Try It 2.24: The following data show the number of months patients typically wait on a transplant list before getting surgery. The data are ordered from smallest to largest. Calculate the mean and median.

3; 4; 5; 7; 7; 7; 7; 8; 8; 9; 9; 10; 10; 10; 10; 10; 11; 12; 12; 13; 14; 14; 15; 15; 17; 17; 18; 19; 19; 19; 21; 21; 22; 22; 23; 24; 24; 24; 24

Example 2.25

Problem: Suppose that in a small town of 50 people, one person earns $5,000,000 per year and the other 49 each earn $30,000. Which is the better measure of the "center": the mean or the median?

Solution:

x̄ = (5,000,000 + 49(30,000))/50 = 129,400

M = 30,000 (There are 49 people who earn $30,000 and one person who earns $5,000,000.)

The median is a better measure of the "center" than the mean because 49 of the values are 30,000 and one is 5,000,000. The 5,000,000 is an outlier. The 30,000 gives us a better sense of the middle of the data.

Try It 2.25: In a sample of 60 households, one house is worth $2,500,000. Twenty-nine houses are worth $280,000, and all the others are worth $315,000. Which is the better measure of the "center": the mean or the median?

Another measure of the center is the mode. The mode is the most frequent value. There can be more than one mode in a data set as long as those values have the same frequency and that frequency is the highest. A data set with two modes is called bimodal.

Example 2.26

Statistics exam scores for 20 students are as follows: 50; 53; 59; 59; 63; 63; 72; 72; 72; 72; 72; 76; 78; 81; 83; 84; 84; 84; 90; 93

Problem: Find the mode.

Solution: The most frequent score is 72, which occurs five times. Mode = 72.

Try It 2.26: The number of books checked out from the library by 25 students are as follows: 0; 0; 0; 1; 2; 3; 3; 4; 4; 5; 5; 7; 7; 7; 7; 8; 8; 8; 9; 10; 10; 11; 11; 12; 12. Find the mode.

Example 2.27

Five real estate exam scores are 430, 430, 480, 480, 495. The data set is bimodal because the scores 430 and 480 each occur twice.

When is the mode the best measure of the "center"? Consider a weight loss program that advertises a mean weight loss of six pounds the first week of the program. The mode might indicate that most people lose two pounds the first week, making the program less appealing.

NOTE: The mode can be calculated for qualitative data as well as for quantitative data. For example, if the data set is: red, red, red, green, green, yellow, purple, black, blue, the mode is red.

Try It 2.27: Five credit scores are 680, 680, 700, 720, 720. The data set is bimodal because the scores 680 and 720 each occur twice. Consider the annual earnings of workers at a factory. The mode is $25,000 and occurs 150 times out of 301. The median is $50,000 and the mean is $47,500. What would be the best measure of the "center"?
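The mode lends itself to code as well. This small sketch (again illustrative, not from the textbook) returns every value tied for the highest frequency, so bimodal data sets come back with two values:

```python
from collections import Counter

def modes(data):
    """Return every value whose frequency equals the highest frequency."""
    counts = Counter(data)
    top = max(counts.values())
    return sorted(v for v, f in counts.items() if f == top)

exam_scores = [50, 53, 59, 59, 63, 63, 72, 72, 72, 72, 72,
               76, 78, 81, 83, 84, 84, 84, 90, 93]
print(modes(exam_scores))         # [72]  (Example 2.26)

real_estate = [430, 430, 480, 480, 495]
print(modes(real_estate))         # [430, 480]  (bimodal, Example 2.27)

# Works for qualitative data too:
print(modes(["red", "red", "red", "green", "green",
             "yellow", "purple", "black", "blue"]))   # ['red']
```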
Calculating the Arithmetic Mean of Grouped Frequency Tables

When only grouped data is available, you do not know the individual data values (we only know intervals and interval frequencies); therefore, you cannot compute an exact mean for the data set. What we must do is estimate the actual mean by calculating the mean of a frequency table. A frequency table is a data representation in which grouped data is displayed along with the corresponding frequencies. To calculate the mean from a grouped frequency table, we can apply the basic definition of mean:

mean = (data sum)/(number of data values)

We simply need to modify the definition to fit within the restrictions of a frequency table. Since we do not know the individual data values, we can instead find the midpoint of each interval:

midpoint = (lower boundary + upper boundary)/2

We can now modify the mean definition to be

Mean of Frequency Table = (Σfm)/(Σf)

where f = the frequency of the interval and m = the midpoint of the interval.

Example 2.28

Problem: A frequency table displaying Professor Blount's last statistics test is shown. Find the best estimate of the class mean.

| Grade interval | Number of students |
| --- | --- |
| 50–56.5 | 1 |
| 56.5–62.5 | 0 |
| 62.5–68.5 | 4 |
| 68.5–74.5 | 4 |
| 74.5–80.5 | 2 |
| 80.5–86.5 | 3 |
| 86.5–92.5 | 4 |
| 92.5–98.5 | 1 |

Table 2.26

Solution: Find the midpoints for all intervals.

| Grade interval | Midpoint |
| --- | --- |
| 50–56.5 | 53.25 |
| 56.5–62.5 | 59.5 |
| 62.5–68.5 | 65.5 |
| 68.5–74.5 | 71.5 |
| 74.5–80.5 | 77.5 |
| 80.5–86.5 | 83.5 |
| 86.5–92.5 | 89.5 |
| 92.5–98.5 | 95.5 |

Table 2.27

Calculate the sum of the product of each interval frequency and midpoint, Σfm:

53.25(1) + 59.5(0) + 65.5(4) + 71.5(4) + 77.5(2) + 83.5(3) + 89.5(4) + 95.5(1) = 1460.25

μ = (Σfm)/(Σf) = 1460.25/19 = 76.86

Try It 2.28: A researcher conducted a study on the effect that playing video games has on memory recall. As part of the study, they compiled the following …
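Example 2.28 can be checked with a few lines of code. In this sketch (illustrative only; the intervals and frequencies come from Table 2.26), each interval midpoint is weighted by its frequency:

```python
# Estimate the mean of grouped data: weight each interval midpoint by
# its frequency, then divide by the total frequency (Example 2.28).

intervals = [  # (lower boundary, upper boundary, frequency)
    (50, 56.5, 1), (56.5, 62.5, 0), (62.5, 68.5, 4), (68.5, 74.5, 4),
    (74.5, 80.5, 2), (80.5, 86.5, 3), (86.5, 92.5, 4), (92.5, 98.5, 1),
]

total_fm = sum(f * (lo + hi) / 2 for lo, hi, f in intervals)  # sum of f*m
total_f = sum(f for _, _, f in intervals)                     # sum of f

print(total_fm)            # 1460.25
print(total_fm / total_f)  # 76.86 (rounded)
```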
189320
https://publishedresearch.cambridgeassociates.com/wp-content/uploads/2014/12/Making-Sense-of-U.S.-Equity-Earnings-US-Sept-2002.pdf
U.S. MARKET COMMENT: MAKING SENSE OF U.S. EQUITY EARNINGS

September 2002

Copyright © 2002 by Cambridge Associates LLC. All rights reserved.

Mike Walden, Celia Dallas, Karen Ross

CONTENTS

Making Sense of U.S. Equity Earnings
Tables:
  A. Stock Buybacks
  B. S&P and NIPA Earnings Growth
  C. S&P 500 Earnings: Reported and Operating Earnings
  D. Price-to-Earnings Ratios Under Various Earnings Definitions
  E. S&P 500 Dividend Discount Model Valuations Under Varying Assumptions

MAKING SENSE OF U.S. EQUITY EARNINGS
"If the stock price goes up, why should anyone complain, even if earnings are being manipulated a bit? This game only works as long as the bull market continues. In a bear market, the most aggressive players are likely to have the biggest falls. Investors have a right to be better informed about earnings. They may be putting too much of their capital in the stocks of companies that have good earnings managers rather than good business managers." —Dr. Edward Yardeni, August 16, 1999

In the fall of 2001, we evaluated the difference between Generally Accepted Accounting Principles (GAAP), or reported, earnings and company-defined operating earnings.[1] The impetus for this analysis was the growing disparity between earnings measures—operating earnings were a record 89.8% higher than reported earnings—and the low but accurate drumbeat of pundits like Yardeni, who had been warning of the earnings game for years. The subsequent meltdown of companies such as Enron, Global Crossing, and WorldCom revealed that investors had been misled by the legal but questionable treatment of certain accounting items, as well as by vehicles financially engineered to deceive (e.g., asset swaps and special purpose entities). These revelations have led to a renewed focus on the inconsistency between corporate reality and accounting statements, and a search for a more genuine measure of earnings. As a result, we compare the merits and limitations of four prominent definitions of corporate earnings—reported earnings, National Income and Product Accounts (NIPA) earnings, S&P Core earnings, and operating earnings—and provide our view of an ideal definition of earnings. Finally, we assess the relative value of U.S. equities using these various earnings figures.

[1] See our November 2001 report, Writing Down Current U.S. Equity Valuations.

Earnings Definitions

The aftermath of the bull market has included a good old-fashioned corporate scrubbing, in which aggressive earnings tactics that were ignored in boom times are scrutinized to determine the quality of reported earnings and the extent to which investors have been misled. It seems as if each time the Financial Accounting Standards Board (FASB) proposes an amendment to the treatment of GAAP items, additional mistreated items surface. In addition, operating earnings have been revealed to be too optimistic, as they are typically more reflective of corporate skill in managing earnings than of skill in managing operations. As a result, investors have begun to focus on other earnings series, such as the NIPA earnings figures maintained by the Bureau of Economic Analysis and the newly developed S&P Core earnings. The table below provides a comparison of these earnings definitions, as well as a definition of our ideal earnings measure.

DEFINITIONS OF EARNINGS (is the item included?)

| Item | GAAP | NIPA | S&P Core | Operating Earnings | Ideal Earnings |
| --- | --- | --- | --- | --- | --- |
| Employee stock options expense | No | Yes | Yes | No | Yes |
| Pension gains | Yes | No | No | Yes | No |
| Pension costs | Yes | Yes | Yes | No | Yes |
| Gains/losses on investment portfolios | Yes | No | Yes | Yes/No | Yes |
| Infrequent/nonrecurring items: | | | | | |
| Goodwill impairment expenses | Yes | No | No | No | No |
| Gains/losses on asset sales | Yes | No | No | Yes/No | Yes/No |
| Write-downs from discontinued ops. | No | No | No | No | No |
| Write-downs from continuing ops. | Yes | No | Yes | No | Yes |
| Merger/acquisition-related costs | Yes | Yes | No | No | Yes |
| Severance from continuing ops. | Yes | Yes | Yes | No | Yes |
| Depreciation | Yes | Yes | Yes | Yes | Yes |

Notes: The general tendency with company-reported operating earnings has been to include gains from investments and asset sales, while excluding charges for investment or asset-sale losses. Asset sales that are part of long-term strategic operations should be included, such as asset sales and purchases for financial corporations or large conglomerates, while other asset gains and losses would be excluded. NIPA makes depreciation adjustments—inventory valuation and capital consumption adjustments—to reflect the current replacement costs of inventory and other assets.

Given that no single measure of earnings will be truly ideal and universally admired, it is important to consider the major advantages and limitations of these different earnings definitions.

The Trouble with GAAP

While GAAP has the advantage of a widely known and consistently applied (relative to operating earnings) definition, it suffers from three primary limitations: the exclusion of a requirement to report employee options as expenses, the inclusion of pension gains, and the inclusion of goodwill impairment expenses.

Exclusion of Employee Stock Options. The current GAAP treatment of stock options is mandated by FAS 123, which states that companies have discretion to include an option expense in the income statement and are only required to disclose the fair value of options in the income statement footnotes. In addition, FAS 123 allows firms to choose the methodology for determining the fair value of their options (e.g., Black-Scholes or binomial), but requires that all firms calculate a fully diluted earnings per share figure, assuming all outstanding options have been exercised.

The main argument against expensing options is that since options require no outlay of cash, they should not be expensed against net income. However, within the GAAP framework, many expenses, such as the depreciation of long-lived assets, are not actual cash outlays, but rather represent the ongoing cost of doing business. Regardless of the source of financing, GAAP accounting requires that an expense be taken to reflect the use and replacement cost of those assets, and to match costs with revenues generated during the given period. On this basis, employees are no different than other assets, and the full cost of hiring, retaining, and compensating employees should be expensed.

Expensing options also makes sense because funding options represents an opportunity cost. In general, a firm has three financing options—retained earnings, debt, and equity—that it can use to fund operations.[2] These sources are not limitless, and any use of one source of funding either reduces the ability to utilize that resource or increases its cost in the future. Issuing common stock via employee stock options has at least two opportunity costs—the additional funds forgone by not selling the same shares in the open market, and a dilution in the value of the stock that can put downward pressure on stock prices, limiting the ability to raise capital at high valuations in the future.[3] Of course, many firms repurchase stock, thereby transferring rather than diluting the ownership interest of each share.

[2] For simplicity, no distinction is made between forms of equity (i.e., preferred or common) or debt (i.e., bonds or bank loans).
[3] GAAP earnings also exclude the tax deduction corporations receive for issuing employee stock options. Current tax laws provide this tax break to avoid double taxation—employees pay income tax on the same value.
However, because the firm must issue debt or use retained earnings to buy back shares, opportunity costs remain, in the form of reduced retained earnings for other purposes and the potential for a higher cost of future debt issuance. According to Steven Zamsky of Morgan Stanley, 60% of debt issued in 1998 was for repurchasing shares. Some of the largest stock option issuers spent an average of 53% of their earned income on share repurchases between 1995 and 2002 (see Table A).[4]

A significant limitation in expensing options is the question of how to most accurately estimate these expenses. The Black-Scholes model has become the most widely used method for determining the value of actively traded options, despite the fact that two of its assumptions—that volatility and risk-free rates remain constant—are unrealistic. While these constant assumptions have little impact when estimating the value of relatively short-duration options, they significantly limit the accuracy of values derived for employee stock options, which typically vest five to ten years after issuance. Other limitations include the fact that employees' stock options are not actively traded and that many options are never exercised due to employee attrition and/or falling stock prices. As a result of these shortcomings, Alliance Bernstein takes 50% of the Black-Scholes value when deducting the cost of employee stock options from earnings.

Under the NIPA earnings definition, options are expensed as the difference between the market value and the strike price at the time of exercise. This has two key advantages. First, only options that are actually exercised are expensed. Second, the value of the expense represents a true cost to the firm: the dollars that were forgone by not selling those shares in the open market. However, the flaw in the NIPA methodology is that it creates a cost/revenue mismatch, since the options are not expensed until several years after they were issued. Coca-Cola, which was one of the first firms to voluntarily expense options, is calculating the value of its options by taking the average of two investment banks' bids. This methodology, albeit appealing, may be impractical for many firms (i.e., firms without a highly liquid market for their options or investment bankers at their disposal).

The cost of options is likely to have had more of an impact in the late 1990s than it will going forward. An Alliance Bernstein analysis estimated that counting options as an expense would have taken 2.5 percentage points off the 9% average annual growth in operating earnings between 1995 and 2000, or 30% of the annual growth over this period. However, given the scrutiny surrounding the ludicrous pay packages of top executives and the diminished expectations of option payoffs, firms are less likely to issue the same volume of options going forward. For example, Morgan Stanley reduces 2002 operating earnings by just 8% to reflect the anticipated cost of options.[5]

[4] Sources: Thomson Financial and Bloomberg Financial Markets.
[5] Source: "'True' Earnings," by Barton Biggs, Morgan Stanley, July 29, 2002.

As of September 13, 2002, 100 companies have announced that they will expense options going forward, with most opting to use the Black-Scholes method.
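Since the debate above turns on how an option grant is valued, the following sketch makes the two approaches concrete. It is not from the report, and all inputs are invented for illustration: a Black-Scholes call valuation with the 50% haircut Alliance Bernstein applies, next to a NIPA-style intrinsic-value expense at exercise.

```python
from math import log, sqrt, exp, erf

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def black_scholes_call(spot, strike, t_years, rate, vol):
    """Black-Scholes value of a European call (no dividends)."""
    d1 = (log(spot / strike) + (rate + 0.5 * vol**2) * t_years) / (vol * sqrt(t_years))
    d2 = d1 - vol * sqrt(t_years)
    return spot * norm_cdf(d1) - strike * exp(-rate * t_years) * norm_cdf(d2)

# Hypothetical at-the-money employee grant: 7-year term, 5% rate, 35% vol.
bs_value = black_scholes_call(spot=40.0, strike=40.0, t_years=7.0,
                              rate=0.05, vol=0.35)
haircut_expense = 0.5 * bs_value      # the 50%-of-Black-Scholes haircut

# NIPA-style expense: intrinsic value at exercise (market minus strike),
# booked only for options that are actually exercised.
nipa_expense = max(0.0, 55.0 - 40.0)  # assume exercise with the stock at 55

print(round(bs_value, 2), round(haircut_expense, 2), nipa_expense)
```

The contrast the report draws falls out directly: the Black-Scholes number is booked at grant (matching cost to the period), while the NIPA number is booked years later, at exercise, but reflects an actual forgone dollar amount.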
In addition, FASB recently proposed three alternatives to assist companies that choose to expense options: the prospective approach, the modified prospective approach, and the modified retroactive approach. The three methodologies vary significantly in expected impact, as the prospective approach only includes those options issued in the current year, while the modified prospective and modified retroactive approaches include options issued since 1995 (the inception date of FAS 123). However, the modified prospective approach expenses previously issued options in the current year, while the retroactive approach calls for the restating of previous years' income. UBS Warburg estimates that the prospective, modified prospective, and modified retroactive approaches would lower 2001 earnings by 4%, 6%, and 14%, respectively. Finally, the International Accounting Standards Board will require all companies outside the United States to start expensing options on January 1, 2004. The rules will apply to all options that have been granted, and a fair-value approach will be required, though the specific model employed will remain at each company's discretion.

Inclusion of Pension Gains. By definition, a pension fund represents the accumulated (or estimated) post-retirement benefits of a firm's employees, not the discretionary assets of a firm.[6] As such, pension benefits should be accounted for as cost items, just as other forms of employee compensation are treated. Pension gains should not be an addition to earnings because the assets accrued in excess of costs cannot be used to fund operations. Furthermore, while gains do result in a reduction in required contributions, it would be double counting to both reduce the pension cost and report a gain.

The second area of contention with pension income under GAAP standards is that firms can manipulate the assumptions used in defining pension gains to increase earnings in a given year. The most common method of boosting earnings relates to the use of the expected return, rather than the actual return, on portfolio assets to calculate the gain from pension assets.[7] For example, many pension funds assume returns of 9% on plan assets, despite the fact that S&P 500 companies realized returns of 7.5% in 2000 and -6.9% in 2001. This enables corporations to report pension gains even when actual asset returns are negative.

The impact of removing pension income from earnings is significantly greater than that which would occur if firms were to lower their return assumptions but continue to include these phantom gains in earnings. Morgan Stanley estimates that S&P 500 operating earnings would fall by 2.4% in 2002 if firms reduced their pension plan return assumptions from 9.2% to 8%. However, removing pension gains from the income statement entirely would have reduced S&P 500 operating income by 5.0%, 5.3%, 7.2%, and 4.5% (estimated) in years 1999 through 2002, respectively.[8]

[6] The rare exceptions to this rule occur when a firm with an overfunded plan liquidates the plan or declares bankruptcy.
[7] For a more complete discussion of pension accounting, see our November 2001 report, Writing Down Current U.S. Equity Valuations.
[8] Source: "Pensions and the Cash Conundrum," Trevor Harris, Morgan Stanley, July 25, 2002.

As other definitions of earnings that exclude pension income—namely S&P Core earnings and NIPA—become increasingly popular, FASB may be pressured to revisit this issue.
In addition, more than half of all pensions are now underfunded, and the 20 largest plan surpluses in the S&P 500, representing 79% of the total S&P 500 surplus, fell from $163 billion in 2000 to $68 billion in 2001. With returns likely to come in negative again in 2002, pension expenses (i.e., cash contributions) may appear more frequently on income statements in 2003.

Inclusion of Goodwill Expenses. Goodwill results from use of the purchase method of accounting for acquisitions, and it represents the difference between the purchase price of a company and the fair value of its assets. FASB recently decided to end the practice of goodwill amortization, requiring instead that companies take a charge (a write-down for the impairment of goodwill) against earnings whenever the value of acquired goodwill falls below its purchase value. The rules are applied retroactively to all acquisitions made under the purchase method.

Goodwill impairment tests may serve as a useful audit trail of a firm's acquisitions, and they certainly provide a useful report card on whether corporations paid a fair price for these acquisitions. However, these charges contribute to a more volatile earnings series that can temporarily diverge from a firm's true earnings potential. In addition, the market has proven more efficient and timely at revaluing acquisitions than have income statements. For example, AOL Time Warner had lost more than 80% of its market value by the time the goodwill charges hit its income statement.

Impairment expenses combined for a $5 per share, or 17%, reduction in 2001 S&P 500 reported earnings, but the charges were disproportionately concentrated in a few large companies. For example, JDS Uniphase took a goodwill impairment charge of $45 billion in 2001, while AOL Time Warner wrote off $54 billion of goodwill in early 2002, with estimates of future goodwill charges in the range of $25 billion to $50 billion.

The Trouble with Operating Earnings

Like GAAP earnings, operating earnings include pension income and exclude employee stock option expenses, but they suffer from additional limitations related to the lack of a standard definition and the ease with which corporations can manipulate these earnings to paint themselves in the best light possible. In theory, operating earnings could provide advantages over GAAP reporting, as they enable corporations to exclude those income, expense, gain, and loss items that are truly one-time or infrequent in nature. However, in practice, we have found that operating earnings tend to be inflated, as managers generally have chosen to include income and gains more broadly than expenses and losses, with the latter two categories typically excluded as one-time events regardless of their frequency.

One-time charges should be reserved for the write-down of expenses that are truly rare in nature. Unfortunately, however, firms often hide operating charges under the one-time category, thus expensing them against reported earnings but excluding them from operating earnings. This practice is most extreme during recessions, when firms engage in "big bath accounting," taking advantage of low earnings expectations to take past, present, and future charges against GAAP reported earnings.

S&P Core and NIPA Earnings Strike a Useful Balance

Both NIPA earnings and S&P's new Core earnings measure offer advantages over GAAP earnings in that they include employee stock option expenses, exclude pension gains, and exclude goodwill impairment expenses.
In addition, they both attempt to exclude one-time or unusual charges and income, and to include those that are truly ongoing and meaningful. S&P Core earnings and NIPA earnings differ in three main areas: gains and losses on investment portfolios, write-downs from continuing operations, and merger/acquisition-related costs. NIPA earnings exclude gains and losses on investment portfolios and write-downs from continuing operations, and include merger/acquisition-related costs, while S&P Core earnings treat each of these items in the opposite manner. Unfortunately, S&P Core earnings data are not yet available, which makes it difficult to ascertain how divergent these two measures will be over time. However, they both provide reasonable and improved earnings measures relative to reported earnings and operating earnings.

NIPA earnings, which extend back to 1929,[9] show significant trend correlations with other long-term series, but avoid the brief recessionary craters and earnings management that have plagued GAAP data in more recent years. But there are several important caveats for those using the NIPA earnings series. First, NIPA is based on tax accounting, which makes it subject to significant revision in the most recent six to 12 months. Second, NIPA earnings (and the market value used for valuations) cover both publicly traded and private corporations, while excluding the financial sector.

S&P Core earnings also have some limitations, the most significant of which is a history that will ultimately extend back only to 1996.[10] This makes it difficult to cast valuations based on S&P Core earnings in an historical context. For example, while we know that the long-term average P/E ratio based on GAAP earnings is approximately 15, we do not have a comparable long-term expectation for Core earnings. In addition, not all analysts agree that S&P Core earnings offer significant advantages. Trevor Harris of Morgan Stanley feels that S&P Core earnings will create further confusion and suffer from a lack of balance. For example, Harris argues that S&P's exclusion of pension gains, but inclusion of income from other financial/portfolio assets, is illogical—he defines both as financing costs and believes they should be treated equally.

[9] We use data from Stephen Wright, University of London, to extend NIPA-style earnings back to 1900.
[10] S&P Core earnings are expected to be available for the trailing one year by late September, but historical figures back to 1996 will be released at a later date.

An Ideal Earnings Measure

While recognizing that there is no such thing as a perfect earnings measure, the following two principles provide a useful basis for defining earnings: earnings should represent income that can be withdrawn without depleting the assets of the firm,[11] and only income earned on the discretionary assets of the firm should be included in earnings available to shareholders. Our ideal earnings definition includes several key attributes of the S&P Core and NIPA earnings definitions—the inclusion of employee stock option expenses, and the exclusion of pension gains and goodwill impairment charges. However, there are some differences as well. For example, both S&P Core and NIPA earnings exclude gains and losses on asset sales. Recognizing the implementation difficulties, we believe that an ideal earnings measure would allow such transactions to be evaluated on a case-by-case basis.

[11] This definition was originally postulated by Nobel Laureate John Hicks, and can be further referenced in "The Trouble With Earnings," Peter L. Bernstein, September 1, 2002.
Asset sales represent an intermediate- to long-term strategy for financial companies (e.g., banks and real estate investment trusts) and large conglomerates that cultivate and sell off assets with some frequency over time; therefore, gains and losses from these asset sales should be included. We would agree with S&P and GAAP on the inclusion of gains and losses on investment portfolios, which are excluded under NIPA's definition. Since the ability to fund operations or pay out dividends can be directly affected by the returns earned on the discretionary assets of the firm, the gains or losses on investments should be included in net income. We also agree with GAAP that merger/acquisition-related costs (excluded from S&P Core earnings) and write-downs from continuing operations (excluded from NIPA) should be included in earnings. On balance, S&P Core and NIPA earnings come close to our ideal earnings measure, and do an excellent job of mediating between the availability of reliable data and the desire to reflect a pure measure of earnings.

Quantitative Impact on Earnings

An assessment of S&P Core earnings relative to GAAP reported earnings suggests that 2001 S&P Core earnings would have been 83% of reported GAAP earnings (excluding JDS Uniphase write-downs of $56 billion) and 66% of operating earnings, or approximately $30 per share. Option expenses make up the majority of the disparity with GAAP earnings, as the effects of the other differences merely canceled each other out.

A comparison of historical S&P 500 and NIPA earnings reveals that NIPA earnings growth (9.1%) was 2.1 percentage points higher than S&P reported earnings growth (7.0%) between 1900 and 2002 (see Table B). Much of the divergence occurred in 2001, when NIPA earnings growth was 3.1%, compared to -50.6% for S&P reported earnings. Although the NIPA data are unlikely to be as artificially depressed as GAAP data because of their exclusion of many infrequent or nonrecurring items, 2001 NIPA earnings are likely to undergo revision. NIPA earnings growth significantly lagged that of S&P reported earnings from 1986-2000 and that of operating earnings from 1986-2002, most likely because NIPA earnings include option expenses and exclude pension gains.[12]

Valuations

While the definitions of earnings can vary greatly, when comparing current price-to-earnings (P/E) ratios of equities with their own historical valuations, equities are at or near fair value according to most earnings measures (see Table D). The P/E ratios using trailing operating earnings are slightly below fair value, while those using forward operating earnings and normalized GAAP earnings (trendline) are 0.4 and 0.2 standard deviations above fair value, respectively. Normalized GAAP earnings based on a trailing five-year average show a slightly higher valuation, 0.8 standard deviations above fair value, while GAAP reported P/Es remain the outlier at 28.6, or approximately 2.3 standard deviations above fair value. However, it should be noted that GAAP earnings remain artificially depressed by inventory and goodwill write-offs, which amounted to $48.5 billion, or $5 per share, in the second quarter of 2002.[13] In fact, normalizing GAAP earnings provides a truer picture of valuation trends, since this tends to smooth out the craters and peaks in GAAP data.

[12] The starting point for comparison is 1986 because this is the first year for which earnings growth data are available for S&P historical operating earnings.
[13] Source: "Slow Grow: Not Much of a Profit Rebound Yet," by Edward Kerschner, UBS Paine Webber, September 30, 2002.
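The "standard deviations above fair value" readings quoted here are simple z-scores of the current P/E against each measure's own history. This small sketch is illustrative only (written for this rewrite, not from the report); its inputs are read off Table D below, with the standard deviation recovered as the "+1 SD" level minus the mean:

```python
# z = (current P/E - historical mean) / historical standard deviation.
pe_measures = {
    # name: (current, mean, +1 SD level), all from Table D
    "GAAP reported":           (28.6, 14.6, 20.7),
    "forward operating":       (15.4, 13.5, 18.7),
    "trailing operating":      (17.6, 18.5, 23.3),
    "normalized GAAP (5-yr)":  (20.9, 15.8, 21.9),
    "normalized GAAP (trend)": (18.1, 16.9, 23.5),
}

for name, (current, mean, plus_one_sd) in pe_measures.items():
    z = (current - mean) / (plus_one_sd - mean)
    print(f"{name}: {z:+.1f} SD")   # GAAP reported comes out near +2.3
```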
P/E ratios using S&P normalized GAAP earnings (trendline and five-year historical average), NIPA earnings, and S&P trailing operating earnings are in a relatively tight range of 17.6 to 20.9 as of September 30, 2002. The similar valuation readings produced by these earnings definitions provide a rather strong consensus that equity markets are approximately fairly valued. However, given the opacity of earnings, many researchers and pundits have diverged from relying solely on P/E ratios—a move we encourage—and have focused on dividend discount models and other valuation ratios.

Dividend discount models are highly sensitive to the inputs used, and not surprisingly, the results using different types of earnings are somewhat divergent. As shown in Table E, assuming a 3% equity risk premium, a 4.83% risk-free rate (the yield on the 30-year Treasury), 7% earnings growth over the next ten years, and 5% earnings growth thereafter, the S&P 500 is only 7% overvalued (essentially a fair value reading) using operating earnings per share of $46, but is 23% overvalued (slightly overvalued) using reported earnings of $29. While the model suggests that the S&P 500 is more expensive when using reported earnings than when using operating earnings, the assumptions above imply that both earnings measures grow at the same rate over the next ten years. Given that reported earnings are at depressed levels and have contracted by a much wider margin than have operating earnings, it is reasonable to assume that reported earnings will grow slightly faster than operating earnings in the short- to intermediate-term.

Another way of using the model is to solve for the average annual earnings growth required over the next ten years for the S&P 500 to be fairly valued at today's prices. Using the above assumptions, reported earnings must grow 9.5%, and operating earnings must grow 6.1%, in order for the market to be fairly valued. Adjusted for the roughly 1.5% inflation expectations priced into the bond market for the next ten years, reported earnings would need to grow about 8%, and operating earnings about 4.5%, in real terms. These growth rates are high relative to historical averages, but are reasonable given the 63.5% decline in earnings between September 30, 2000 and March 31, 2002.

Since 1926, net income has grown an average of 5.6% over rolling five-year periods, and 5.8% over rolling ten-year periods. In real terms, earnings have grown an average of 2.1% over both five- and ten-year periods. We would expect earnings growth following periods of significant earnings decline to be above average, which historically has been the case. There are two periods in which earnings fell by an amount comparable to the current period: reported earnings fell 59% (60% in real terms) in 1937-38 and 75% (67% in real terms) in 1929-32. For the ten years following these periods, earnings increased at an average annual rate of approximately 7% in real terms for both periods, and 12% and 10% in nominal terms, respectively.
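For the mechanics, here is a minimal sketch of a two-stage dividend discount model in the spirit of the one described above. The report does not publish its exact formulation, so the payout ratio and the September 30, 2002 index level used here are invented inputs, and the output will not match Table E exactly:

```python
# Two-stage DDM sketch: ten years of near-term dividend growth, then a
# Gordon-growth terminal value. Payout ratio and index level are assumed.

def ddm_fair_value(eps, g_near, g_long, risk_free, erp,
                   payout=0.45, years=10):
    """Present value of dividends under two-stage growth."""
    r = risk_free + erp                           # required return
    pv, div = 0.0, eps * payout
    for t in range(1, years + 1):
        div *= 1 + g_near                         # dividend in year t
        pv += div / (1 + r) ** t
    terminal = div * (1 + g_long) / (r - g_long)  # value at end of year 10
    return pv + terminal / (1 + r) ** years

index_level = 815.0   # assumed S&P 500 level around September 30, 2002

for label, eps in [("operating EPS $46", 46.0), ("reported EPS $29", 29.0)]:
    fv = ddm_fair_value(eps, g_near=0.07, g_long=0.05,
                        risk_free=0.0483, erp=0.03)
    premium = index_level / fv - 1                # > 0 means overvalued
    print(f"{label}: fair value {fv:.0f}, over/(under)valued {premium:+.0%}")
```

The "solve for required growth" exercise in the text is the inverse problem: hold the fair value equal to the index level and search over g_near (e.g., by bisection) until the two match.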
The risk that economic conditions will deteriorate remains; however, that appears to be discounted into the market today, as earnings expectations have fallen throughout the year and have been significantly trimmed in recent weeks. Given that most P/E measures are at or approaching fair value, and that dividend discount model analysis suggests similar valuation levels on the basis of operating and reported earnings, we now characterize the S&P 500 as fairly valued. Reported earnings are still somewhat depressed relative to historical reported earnings, and yet dividend discount model analysis still suggests the market is approximately fairly valued using reported earnings. This occurs because the model accounts for the higher-than-average earnings growth expectations over the next ten years and the low interest rate environment. However, we would caution that as a bear market progresses, it typically passes from overvalued to fairly valued to undervalued on the way to the bottom.

Table A: STOCK BUYBACKS

Seven largest share buyback programs, January 1995 - August 2002 (IBM, Oracle, Hewlett-Packard, Intel, Merck, General Electric, Citigroup). Average of the seven firms: buybacks of $21.04 billion against earnings of $41.38 billion, a buybacks-to-earnings ratio of 53.3%. [Two charts accompany the table: buybacks versus profits by company, in $ billions, and percent of net new issues by market value, 1900-2002.]

Sources: Thomson Financial, Bloomberg Financial Markets, and Business Week (top chart); "Measures of Stock Market Value and Returns for the US Nonfinancial Corporate Sector, 1900-2002," by Stephen Wright, BEA, and the Federal Reserve (bottom chart).

Table B: S&P AND NIPA EARNINGS GROWTH (as of September 30, 2002)

Earnings growth averages and standard deviations (%):

| Period | S&P Reported: Avg | S&P Reported: Std Dev | S&P Operating: Avg | S&P Operating: Std Dev | NIPA: Avg | NIPA: Std Dev |
| --- | --- | --- | --- | --- | --- | --- |
| 1900-2002 | 7.0 | 25.9 | -- | -- | 9.1 | 26.7 |
| 1900-2000 | 7.5 | 25.5 | -- | -- | 9.2 | 26.9 |
| 1986-2002 | 6.6 | 22.1 | 7.4 | 15.8 | 4.6 | 11.9 |
| 1986-2000 | 9.9 | 17.5 | 9.6 | 15.0 | 4.6 | 12.7 |

[Chart: year-over-year percent change in S&P reported, S&P operating, and NIPA earnings, 1900-2002. Callouts: 1991, NIPA 2%, operating -8%, reported -25%; 2001, NIPA 3%, operating -20%, reported -51%.]

Sources: Federal Reserve, NIPA, Standard & Poor's, Standard & Poor's Compustat, Thomson Financial, and The Wall Street Journal. Notes: S&P reported earnings represent Robert Shiller's data from 1900 through 1926. NIPA earnings data are estimated for the most recent quarter, ending September 30, 2002.

Table C: S&P 500 EARNINGS: REPORTED AND OPERATING EARNINGS, December 31, 1985 - September 30, 2002

[Chart of earnings per share; latest readings are reported earnings of $28.5 (preliminary) and operating earnings of $46.2 (preliminary).]

Sources: Standard & Poor's, Standard & Poor's Compustat, Thomson Financial, and The Wall Street Journal.
Table D: PRICE-TO-EARNINGS RATIOS UNDER VARIOUS EARNINGS DEFINITIONS

| Measure | Period | Current | Mean | +1 SD | -1 SD |
| --- | --- | --- | --- | --- | --- |
| S&P 500 price-to-GAAP earnings | Jan 1, 1900 - Sep 30, 2002 | 28.6 | 14.6 | 20.7 | 8.6 |
| S&P 500 12-month forward price-to-operating earnings | Jul 1, 1978 - Sep 30, 2002 | 15.4 | 13.5 | 18.7 | 8.3 |
| S&P 500 price-to-operating earnings | Jul 1, 1985 - Sep 30, 2002 | 17.6 | 18.5 | 23.3 | 13.7 |
| Price-to-NIPA earnings, corporate non-financial sector | Jan 1, 1900 - Sep 30, 2002 | 19.8 | 14.3 | 21.7 | 7.0 |
| S&P 500 price-to-normalized GAAP earnings (five-year average) | Jan 1, 1900 - Sep 30, 2002 | 20.9 | 15.8 | 21.9 | 9.7 |
| S&P 500 price-to-normalized GAAP earnings (trendline) | Jan 1, 1960 - Sep 30, 2002 | 18.1 | 16.9 | 23.5 | 10.3 |

Notes: NIPA earnings data are estimated for September 30, 2002. The price-to-NIPA ratio peaked at 36.2 on December 31, 1999. Sources: Calculated from data provided by NIPA, the Federal Reserve, Robert Shiller's data, Standard & Poor's, Standard & Poor's Compustat, Stephen Wright (Department of Economics, Birkbeck College, University of London), Thomson Financial, and The Wall Street Journal.

Table E: S&P 500 DIVIDEND DISCOUNT MODEL VALUATIONS UNDER VARYING ASSUMPTIONS

S&P 500 fair value and percentage over-/(under-)valuation under varying equity risk premium (1% to 4%) and ten-year earnings growth (1% to 15%) assumptions, shown for two earnings bases: 12-month trailing operating earnings of $46 and 12-month trailing reported earnings of $29. Other key assumptions: long-term earnings growth of 5.0% and a risk-free rate of 4.83%, the yield on the 30-year Treasury on September 30, 2002. [The full fair-value grids are not reproduced here.]

Sources: Standard & Poor's, Standard & Poor's Compustat, Thomson Financial, and Thomson Datastream. The 30-year Treasury yield is an extrapolation of the Long-Term Average Rate series calculated by the Treasury after February 18, 2002, when the Treasury ceased publication of the 30-year constant maturity series.
189321
https://topp.openproblem.net/p21
TOPP: The Open Problems Project

Problem 21: Shortest Paths among Obstacles in 2D

Statement: Can shortest paths among h obstacles in the plane, with a total of n vertices, be found in optimal O(n + h log h) time using O(n) space?

Origin: Uncertain, pending investigation.

Status/Conjectures: Solved by Haitao Wang [Wan23], who gives an optimal O(n + h log h) time and O(n) space algorithm.

Partial and Related Results: The first algorithm linear in n in time and space was quadratic in h [KMM97], after which Hershberger and Suri [HS99] gave an O(n log n) time algorithm using O(n log n) space. Inkulu et al. [IKM10] later found an O(n + h log h log n) time and O(n) space algorithm. By modifying Hershberger and Suri's algorithm, Wang [Wan21] reduced the space to O(n) while the running time remained O(n log n). Wang then matched the Ω(n + h log h) lower bound by presenting the first O(n + h log h) time and O(n) space algorithm [Wan23]. In three dimensions, the Euclidean shortest path problem among general obstacles is NP-hard, but its complexity remains open for some special cases, such as when the obstacles are disjoint unit spheres or axis-aligned boxes; see [Mit00] for a survey.

Appearances: [MO01]

Categories: shortest paths

Entry Revision History: J. O'Rourke, 2 Aug. 2001; J. Mallen, 19 June 2025.

Bibliography

[HS99] John Hershberger and Subhash Suri. An optimal algorithm for Euclidean shortest paths in the plane. SIAM J. Comput., 28(6):2215–2256, 1999.
[IKM10] Rajasekhar Inkulu, Sanjiv Kapoor, and S. N. Maheshwari. A near optimal algorithm for finding Euclidean shortest path in polygonal domain. arXiv:1011.6481 [cs.CG], 2010.
[KMM97] S. Kapoor, S. N. Maheshwari, and Joseph S. B. Mitchell. An efficient algorithm for Euclidean shortest paths among polygonal obstacles in the plane. Discrete Comput. Geom., 18:377–383, 1997.
[Mit00] Joseph S. B. Mitchell. Geometric shortest paths and network optimization. In Jörg-Rüdiger Sack and Jorge Urrutia, editors, Handbook of Computational Geometry, pages 633–701. Elsevier Publishers B.V. North-Holland, Amsterdam, 2000.
[MO01] J. S. B. Mitchell and Joseph O'Rourke. Computational geometry column 42. Internat. J. Comput. Geom. Appl., 11(5):573–582, 2001. Also in SIGACT News 32(3):63–72 (2001), Issue 120.
[Wan21] Haitao Wang. Shortest paths among obstacles in the plane revisited. In Proceedings of the Thirty-Second Annual ACM-SIAM Symposium on Discrete Algorithms, SODA '21, pages 810–821, USA, 2021. Society for Industrial and Applied Mathematics.
[Wan23] Haitao Wang. A new algorithm for Euclidean shortest paths in the plane. J. ACM, 70(2), March 2023.
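For intuition about what the algorithms above optimize, the following sketch (illustrative only; it is not any of the cited algorithms, and all names are invented) implements the classical visibility-graph baseline: connect every pair of mutually visible vertices, then run Dijkstra. Construction is cubic in this naive form, far from the optimal O(n + h log h) bound, and the sketch assumes convex, pairwise-disjoint obstacles given as counterclockwise vertex lists.

```python
import heapq
from itertools import combinations
from math import dist

def orient(a, b, c):
    """Sign of the cross product (b - a) x (c - a)."""
    v = (b[0]-a[0])*(c[1]-a[1]) - (b[1]-a[1])*(c[0]-a[0])
    return (v > 0) - (v < 0)

def properly_crosses(p, q, a, b):
    """True if segments pq and ab cross at a point interior to both."""
    return (orient(p, q, a) * orient(p, q, b) < 0 and
            orient(a, b, p) * orient(a, b, q) < 0)

def inside_convex(pt, poly):
    """Strictly-inside test for a counterclockwise convex polygon."""
    return all(orient(poly[i], poly[(i+1) % len(poly)], pt) > 0
               for i in range(len(poly)))

def visible(p, q, obstacles):
    edges = [(poly[i], poly[(i+1) % len(poly)])
             for poly in obstacles for i in range(len(poly))]
    if any(properly_crosses(p, q, a, b) for a, b in edges):
        return False
    mid = ((p[0]+q[0]) / 2, (p[1]+q[1]) / 2)   # rejects chords through
    return not any(inside_convex(mid, poly)    # a convex obstacle
                   for poly in obstacles)

def shortest_path_length(s, t, obstacles):
    nodes = [s, t] + [v for poly in obstacles for v in poly]
    graph = {v: [] for v in nodes}
    for u, v in combinations(nodes, 2):        # build the visibility graph
        if visible(u, v, obstacles):
            w = dist(u, v)
            graph[u].append((v, w))
            graph[v].append((u, w))
    best, heap = {s: 0.0}, [(0.0, s)]          # Dijkstra from s
    while heap:
        d, u = heapq.heappop(heap)
        if u == t:
            return d
        if d > best.get(u, float("inf")):
            continue
        for v, w in graph[u]:
            if d + w < best.get(v, float("inf")):
                best[v] = d + w
                heapq.heappush(heap, (d + w, v))
    return float("inf")

# One unit square blocking the straight line from (-1, 0.5) to (2, 0.5):
square = [(0, 0), (1, 0), (1, 1), (0, 1)]      # counterclockwise
print(shortest_path_length((-1, 0.5), (2, 0.5), [square]))  # ~3.236
```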
189322
https://emedicine.medscape.com/article/2062452-clinical
Aortic Dissection Clinical Presentation
Updated: Sep 03, 2024. Author: Mary C Mancini, MD, PhD, MMM; Chief Editor: John Geibel, MD, MSc, DSc, AGAF

History
Patients with acute aortic dissection typically present with the sudden onset of severe chest pain, although this description is not universal. Some patients present with only mild pain, often mistaken for a symptom of musculoskeletal conditions in the thorax, groin, or back. Consider thoracic aortic dissection in the differential diagnosis of all patients presenting with chest pain.

The location of the pain may indicate where the dissection arises. Anterior chest pain and chest pain that mimics acute myocardial infarction usually are associated with anterior arch or aortic root dissection. This is caused by the dissection interrupting flow to the coronary arteries, resulting in myocardial ischemia. Pain in the neck or jaw indicates that the dissection involves the aortic arch and extends into the great vessels. Tearing or ripping pain in the interscapular area may indicate that the dissection involves the descending aorta. The pain typically changes as the dissection evolves.

The pain of aortic dissection is typically distinguished from the pain of acute myocardial infarction by its abrupt onset and maximal severity at onset, though the presentations of the two conditions overlap to some degree and are easily confused. Aortic dissection can be presumed in patients with symptoms and signs suggestive of myocardial infarction but without classic electrocardiographic (ECG) findings.

Aortic dissection is painless in about 10% of patients. Painless dissection is more common in those with neurologic complications from the dissection and those with Marfan syndrome. Neurologic deficits are a presenting sign in as many as 20% of cases. Syncope is part of the early course of aortic dissection in approximately 5% of patients and may be the result of increased vagal tone, hypovolemia, or dysrhythmia.
Cerebrovascular accident (CVA) symptoms include hemianesthesia and hemiparesis or hemiplegia. Altered mental status is also reported. Patients with peripheral nerve ischemia can present with numbness and tingling, pain, or weakness in the extremities. Horner syndrome is caused by interruption of the cervical sympathetic ganglia and manifests as ptosis, miosis, and anhidrosis. Hoarseness from recurrent laryngeal nerve compression has also been described.

Cardiovascular manifestations involve symptoms suggestive of congestive heart failure secondary to acute severe aortic regurgitation, including dyspnea and orthopnea. Respiratory symptoms can include dyspnea and hemoptysis if the dissection ruptures into the pleura or if tracheal or bronchial obstruction has occurred. Physical findings of a hemothorax may be found if the dissection ruptures into the pleura.

Other manifestations include the following: dysphagia from compression of the esophagus; flank pain if the renal artery is involved; abdominal pain if the dissection involves the abdominal aorta; fever; and anxiety and premonitions of death.

A retrospective chart review of 83 patients with a thoracic aortic dissection revealed that only 40% of alert patients were asked basic questions about their pain. Remember to cover the P, Q, R, S, and T (position, quality, radiation, severity, and timing) of pain in all able patients. Timing includes the rate of onset, duration, and frequency of episodes. Also ask about migration of pain, aggravating or alleviating factors, and associated symptoms.

Physical Examination
Hypertension may result from a catecholamine surge or underlying essential hypertension. Hypotension is an ominous finding and may be the result of excessive vagal tone, cardiac tamponade, or hypovolemia from rupture of the dissection. An interarm blood pressure differential greater than 20 mm Hg should increase the suspicion of aortic dissection, but it does not rule it in: significant interarm blood pressure differentials may be found in 20% of people without aortic dissection.

Signs of aortic regurgitation include bounding pulses, wide pulse pressure, and diastolic murmurs. Acute, severe aortic regurgitation may result in signs suggestive of congestive heart failure: dyspnea, orthopnea, bibasilar crackles, or elevated jugular venous pressure. Other cardiovascular manifestations include findings suggestive of cardiac tamponade (eg, muffled heart sounds, hypotension, pulsus paradoxus, jugular venous distention, Kussmaul sign). Tamponade must be recognized promptly. Superior vena cava syndrome can result from compression of the superior vena cava by a large, distorted aorta. Wide pulse pressure and pulse deficit or asymmetry of peripheral pulses are reported. Patients with right coronary artery ostial dissection may present with acute myocardial infarction, commonly inferior myocardial infarction. Pericardial friction rub may occur secondary to pericarditis.

Neurologic deficits are a presenting sign in up to 20% of cases. The most common neurologic findings are syncope and altered mental status. Syncope is part of the early course of aortic dissection in about 5% of patients and may be the result of increased vagal tone, hypovolemia, or dysrhythmia. Other causes of syncope or altered mental status include strokes from compromised blood flow to the brain or spinal cord and ischemia from interruption of blood flow to the spinal arteries. Peripheral nerve ischemia can manifest as numbness and tingling in the extremities.
Hoarseness from recurrent laryngeal nerve compression has also been described. Horner syndrome is caused by interruption of the cervical sympathetic ganglia and presents with ptosis, miosis, and anhidrosis. Other diagnostic clues include a new diastolic murmur or asymmetric pulses. Pay careful attention to the carotid, brachial, and femoral pulses on initial examination, and look for progression of bruits or development of new bruits on reexamination. Physical findings of a hemothorax may be found if the dissection ruptures into the pleura.

Complications
Complications are diverse and numerous; anatomically related complications are deducible and include the following:
- Hypotension and shock as a result of aortic rupture, with eventual death from exsanguination
- Pericardial tamponade secondary to hemopericardium; this complicates type A aortic dissection
- Acute aortic regurgitation as a complication of proximal aortic dissection propagating into a sinus of Valsalva, with resultant aortic valve insufficiency
- Pulmonary edema secondary to acute aortic valve regurgitation
- Rare occurrence of right or left coronary ostium involvement leading to myocardial ischemia
- Neurologic findings due to carotid artery obstruction: ischemic CVA, hemiplegia, hemianesthesia (aortic branch involvement can lead to spinal cord ischemia, ischemic paraparesis, and paraplegia)
- Mesenteric and renal ischemia, which can lead to bowel or visceral ischemia, renal infarction, hematuria, or acute renal failure (ARF)
- Compressive symptoms, such as superior vena cava syndrome, Horner syndrome (when it affects the superior cervical ganglia), dysphagia (when it involves the esophagus), airway compromise, and hemoptysis (when it compresses the bronchus)
- Other compressive symptoms, which can be associated with vocal cord paralysis and hoarseness
- Claudication, which can develop from extension of the dissection into the iliac arteries
- Redissection and progressive aortic diameter enlargement
- Aneurysmal dilatation and saccular aneurysm
Media Gallery: 37 images (CT scans, chest radiographs, angiograms, an ECG, and intraoperative photographs) illustrating intimal flaps, true versus false lumens, pleural effusions, mediastinal widening, and hemothorax in aortic dissection; individual captions omitted. One classification diagram is captioned as follows: Image A represents a Stanford A or DeBakey type I dissection; Image B a Stanford A or DeBakey type II dissection; Image C a Stanford B or DeBakey type III dissection; Image D is classified like A but contains an additional entry tear in the descending thoracic aorta. Note that a primary arch dissection does not fit neatly into either classification.
189323
https://www.nature.com/articles/s41598-022-18654-2
Selective serotonin reuptake inhibitors increase risk of upper gastrointestinal bleeding when used with NSAIDs: a systemic review and meta-analysis
Article, Open access, Published 2022
Syed Mobashshir Alam, Mohammed Qasswal, Muhammad Junaid Ahsan, Ryan W. Walters & Subhash Chandra
Scientific Reports volume 12, Article number: 14452 (2022)
Subjects: Gastroenterology; Gastrointestinal bleeding; Upper gastrointestinal bleeding

Abstract
The use of selective serotonin reuptake inhibitors (SSRIs) can increase the risk of gastrointestinal (GI) bleeding. Similarly, it is well known that the use of NSAIDs predisposes patients to upper GI bleeding. The aim of this study was to explore whether the addition of SSRIs in patients already taking NSAIDs significantly increases their risk of upper GI bleeding. An electronic literature search was conducted using the SCOPUS and MEDLINE databases from inception through September 2020. Cohort and case-control trials that reported patients with upper GI bleeding on NSAIDs with SSRIs, compared to controls on NSAIDs only, were included. The Newcastle-Ottawa checklist was used to ensure inclusion of high-quality studies. Data were extracted by the lead investigator and cross-checked by the second author. Dichotomous data were pooled to obtain an odds ratio (OR) of the risk of upper GI bleeding in patients on NSAIDs with concomitant SSRI use. The primary endpoint of the study was the risk of upper GI bleeding with SSRIs and NSAIDs compared to NSAIDs alone. A total of 366 citations were reviewed, and 21 were selected for full-text evaluation. One cohort and 9 case-control studies were eligible. There was an additional increased risk of upper GI bleeding in patients on NSAIDs with concomitant SSRI use (OR 1.75, 95% CI 1.32-2.33). In patients already on NSAID therapy, the concomitant use of SSRIs can significantly increase the risk of upper GI bleeding.

Introduction
Upper gastrointestinal bleeding (UGIB) is defined as bleeding that originates proximal to the ligament of Treitz [1]. Helicobacter pylori infection and nonsteroidal anti-inflammatory drugs (NSAIDs) are two main etiologies of UGIB.
NSAID therapy causes UGIB via the inhibition of cyclooxygenase 1 and 2 (COX-1 and COX-2), compromising mucosal health and promoting ulcer formation [2]. Use of another class of medications, selective serotonin reuptake inhibitors (SSRIs), has been identified in recent studies as a risk factor for GI bleeding due to their effect on platelet aggregation and subsequent impaired hemostasis [3,4]. SSRIs have become first-line treatment in many psychiatric disorders, and the number of patients on additional NSAID therapy has increased. The most recent systematic review and meta-analysis, involving 15 case-control and 4 cohort studies, explored the role of SSRIs in UGIB [5]. There was a modest increase in risk of UGIB in low-risk patients. Concomitant use of NSAIDs with SSRIs increased the risk of UGIB substantially, with an OR of 4.25 (95% CI 2.82-6.42). Since the publication of this study in 2014, there have been additional better-quality studies exploring the association of SSRIs and upper GI bleeding. Many studies had a subgroup of patients who were on NSAID therapy prior to the initiation of SSRI use and subsequently developed an upper GI bleed. While the previous systematic review did look at the effect of both SSRIs alone and SSRIs with concomitant medications (of various classes) on GI bleeds, it did not analyze the effect on patients already at high risk for GI bleed, such as NSAID users. The aim of our systematic review and meta-analysis was to explore the effect of adding an SSRI medication on upper GI bleed risk in individuals already on NSAID therapy.

Methods
Search strategy and study selection
We conducted this systematic review and meta-analysis following the reporting guidelines of Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA). We searched MEDLINE, Scopus, and all the evidence-based medicine reviews that included the Cochrane Database of Systematic Reviews from inception through August 31, 2020. Our search was limited to studies written in English, and no other restrictions were applied. Three investigators (S.M.A., M.Q., M.J.A.) identified the selected papers independently by screening the titles and abstracts. Meta-analyses, case reports, and reviews were excluded. Full reports were then obtained for the potentially eligible studies. The same investigators reviewed the full manuscripts to determine final eligibility. Any disagreements in study eligibility were resolved by consensus amongst the investigators.

Definitions and endpoints
The primary study outcome was the odds ratio of having a GI bleed while on concomitant SSRIs and NSAIDs compared to NSAIDs alone. Subjects were considered to have an upper GI bleed if there were any symptoms of hematemesis, coffee-ground emesis, melena, or hematochezia, or a verified bleed on endoscopy or colonoscopy. Patients needed to be on this combination for at least 1 week, had to have had no GI bleeding prior to starting SSRIs and NSAIDs, and had to have no other precipitating factors increasing the likelihood of GI bleeding. Multiple studies included in the final analysis reported on SSRIs and NSAIDs as a class rather than as individual medications. Therefore, the investigators collected data based on whether a patient was prescribed a certain drug class (NSAID or SSRI) rather than an individual drug. To minimize bias, we used the Newcastle-Ottawa scale to objectively select high-quality studies [6].
Quality assessment
Two investigators (S.M.A., M.J.A.) used the Newcastle-Ottawa quality assessment scale for case-control studies (NOS) to assess the quality of each selected study. A quality score was calculated for each study based on selection of the groups included in each study, comparability, and assessment of the outcome and exposure. Disagreements with study selection were resolved by consensus in the presence of the fourth investigator (S.C.).

Data extraction
Two reviewers (S.M.A., M.Q.) extracted the data from the selected articles independently, using a standardized extraction sheet in Microsoft Excel. We included study characteristics (author, year, country, number of patients, and study design), age, gender, BMI, presence of cirrhosis, anticoagulant use, P2Y12 inhibitor use, NSAID use alone, NSAID use with an SSRI, and the use of an SSRI alone.

Data analysis
All study data are presented as prevalence of GI bleed compared between patients on NSAIDs alone and on concomitant SSRIs and NSAIDs using the log-odds ratio. For reporting, we present odds ratios. Between-study heterogeneity was quantified for all studies via I² and τ² and tested empirically using Cochran's Q test. For all meta-analyses, the random-effects approach using residual maximum likelihood (REML) estimation was used. Publication bias is shown via funnel plots and tested via Egger's test. We had originally planned to conduct a series of meta-regressions for concomitant anticoagulant, steroid, or proton pump inhibitor (PPI) use; however, we were unable to quantify concomitant use for patients meeting study inclusion criteria. All analyses were conducted using the meta package within Stata v. 17.0, with p < 0.05 used to indicate statistical significance.

Results
Search
The systematic review initially identified 366 potential records, of which 17 were removed as duplicates. A total of 21 studies were selected for thorough review by our primary investigators (S.M.A., M.Q.), of which 11 were excluded. Thus, a total of 10 studies were included for analysis (see Fig. 1, PRISMA diagram).

Study characteristics and quality assessment
The 10 studies included in the systematic review had a total of 66,419 patients from 7 different countries. Characteristics of included studies are summarized in Table 1. Studies included in data synthesis ranged widely in average age (50s-70s when reported), gender proportions (35-75% male between studies), anticoagulant use, steroid use, and PPI use. Quality assessment scores of individual studies on subject selection, exposure, and comparability are summarized in Table 2. We used the Newcastle-Ottawa scale to assess the quality of the studies. This scale awards points for Selection, Comparability, and Exposure: four, two, and three points are possible, respectively, and our team decided that 6 points would be our cutoff for "high-quality studies." Five studies scored 9, four studies scored 8, and one study scored 6. Studies overall did well in case and control selection, comparability, and exposure.

UGIB with concomitant SSRI and NSAID use
Overall, the odds of UGIB were 75% higher in the presence of concomitant SSRI use compared to NSAIDs alone (odds ratio: 1.75, 95% CI: 1.32-2.33, p < 0.001; Fig. 2). In the combined estimate, statistically significant heterogeneity was observed (I² = 85.7%, p < 0.001).
Subgroup analysis and meta-regression to explain heterogeneity across additional risk factors for upper GI bleeding (e.g., anticoagulant, steroid, and/or PPI use) could not be performed due to lack of available data for these risk factors. As expected in the presence of heterogeneity, moderate funnel plot asymmetry was observed (Fig. 3). However, publication bias was not statistically significant (p = 0.172).

Discussion
The results of our systematic review show that the addition of SSRIs in patients already on NSAID therapy led to a significantly higher likelihood of developing upper GI bleeding. Concomitant use of SSRIs and NSAIDs was associated with a 75% increased risk of upper GI bleeding. The plausible explanation for why the addition of SSRIs causes this increased risk of bleeding is theorized to be related to the lack of serotonin uptake in platelets [7]. Measured serotonin inside platelets has been minimal in patients on SSRIs, owing to the inability of platelets to reuptake serotonin while on these medications [7]. When hemorrhage occurs, the release of serotonin by platelets induces vasoconstriction and enhances platelet aggregation by reducing the size of the vessel lumen and potentiating the effect of adenosine diphosphate (ADP) [8,9]. SSRIs indirectly prevent this serotonin release, and physiological hemostasis becomes compromised.

The mechanism of UGIB with NSAID therapy is different from that with SSRIs. Though multiple factors play a role in mucosal injury, inhibition of COX-1 and COX-2 remains the major one. Through this inhibition, NSAID therapy prevents the synthesis of the cytoprotective prostaglandin E2 and prostacyclin, which mediate the majority of mucosal defense and recovery and are potent vasodilators. Their depletion compromises the secretion of bicarbonate and mucus in both the stomach and duodenum, as well as mucosal blood flow [10]. NSAID use increases the risk of upper GI bleeding up to fourfold [11]. By compromising physiologic hemostasis through two independent mechanisms, concomitant use of SSRIs and NSAIDs further increases the risk of upper GI bleeding. If the medication combination is unavoidable, it is important for clinicians to consider adding a protective agent such as a proton-pump inhibitor to minimize the risk of bleeding, especially in patients with other traditional risk factors (advanced age, male sex, NSAID use, history of peptic ulcer disease) [12]. The magnitude of the increase in risk of bleeding with concomitant SSRI and NSAID use is comparable to that previously reported in the setting of concomitant NSAID use and H. pylori infection [13]. Use of PPI co-therapy with SSRIs has been associated with a significantly lower risk of upper GI bleeding, as seen in a meta-analysis by Targownik et al. [14]. This study also suggested that the addition of PPIs may help reduce the risk of GI bleeding when SSRIs and NSAIDs are used together. However, further study is needed to verify this effect.

Conclusion
Use of SSRIs in patients on NSAIDs significantly increases the risk of upper GI bleeding. Clinicians need to weigh the risks and benefits of adding SSRI therapy if NSAIDs cannot be discontinued. The use of acid-suppressing agents such as PPIs can be considered in patients taking concomitant SSRIs with NSAIDs. However, further investigation of patients on SSRIs and NSAIDs with and without PPIs is needed before the true risk of developing an upper GI bleed can be determined.

Data availability
Data are available in the articles referenced.
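As an illustration of the random-effects pooling described under "Data analysis", the sketch below computes a pooled odds ratio from per-study 2×2 counts using the DerSimonian-Laird estimator of between-study variance. The paper used REML via Stata's meta package; DL is a simpler stand-in that is easy to compute by hand, and the study counts below are invented placeholders, not data from the included studies.

```python
# Hedged sketch: DerSimonian-Laird random-effects pooling of log odds ratios.
import math

# hypothetical per-study counts: (exposed events, exposed non-events,
#                                 control events, control non-events)
studies = [(30, 170, 40, 560), (12, 88, 25, 375), (55, 445, 60, 940)]

y, w = [], []
for a, b, c, d in studies:
    y.append(math.log((a * d) / (b * c)))   # log odds ratio
    w.append(1 / (1/a + 1/b + 1/c + 1/d))   # inverse variance weight

# fixed-effect quantities feeding the DL tau^2 estimate
ybar = sum(wi * yi for wi, yi in zip(w, y)) / sum(w)
Q = sum(wi * (yi - ybar) ** 2 for wi, yi in zip(w, y))
C = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
tau2 = max(0.0, (Q - (len(studies) - 1)) / C)  # between-study variance

# random-effects weights and pooled estimate
w_re = [1 / (1/wi + tau2) for wi in w]
mu = sum(wi * yi for wi, yi in zip(w_re, y)) / sum(w_re)
se = math.sqrt(1 / sum(w_re))
print(f"pooled OR = {math.exp(mu):.2f}, "
      f"95% CI {math.exp(mu - 1.96*se):.2f}-{math.exp(mu + 1.96*se):.2f}")
```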
Abbreviations
SSRI: selective serotonin reuptake inhibitor
NSAID: non-steroidal anti-inflammatory drug
UGIB: upper gastrointestinal bleeding
NOS: Newcastle-Ottawa Scale

References
1. Wilkins, T., Wheeler, B. & Carpenter, M. Upper gastrointestinal bleeding in adults: Evaluation and management. Am. Fam. Phys. 101(5), 294-300 (2020). Erratum in: Am. Fam. Phys. 103(2), 70 (2021).
2. Bjarnason, I. et al. Mechanisms of damage to the gastrointestinal tract from nonsteroidal anti-inflammatory drugs. Gastroenterology 154(3), 500-514 (2018).
3. Bismuth-Evenzal, Y. et al. Decreased serotonin content and reduced agonist-induced aggregation in platelets of patients chronically medicated with SSRI drugs. J. Affect. Disord. 136(1-2), 99-103 (2012).
4. Halperin, D. & Reber, G. Influence of antidepressants on hemostasis. Dialog. Clin. Neurosci. 9, 47-59 (2007).
5. Anglin, R. et al. Risk of upper gastrointestinal bleeding with selective serotonin reuptake inhibitors with or without concurrent nonsteroidal anti-inflammatory use: A systematic review and meta-analysis. Am. J. Gastroenterol. 109(6), 811-819 (2014).
6. Wells, G. A. et al. The Newcastle-Ottawa Scale (NOS) for assessing the quality of nonrandomised studies in meta-analyses (2000).
7. Maurer-Spurej, E., Pittendreigh, C. & Solomons, K. The influence of selective serotonin reuptake inhibitors on human platelet serotonin. Thromb. Haemost. 91, 119-128 (2004).
8. Li, N., Wallén, N. H., Ladjevardi, M. & Hjemdahl, P. Effects of serotonin on platelet activation in whole blood. Blood Coagul. Fibrinolysis 8(8), 517-523 (1997).
9. Jonnakuty, C. & Gragnoli, C. What do we know about serotonin? J. Cell. Physiol. 217(2), 301-306 (2008).
10. Tomisato, W. et al. Role of direct cytotoxic effects of NSAIDs in the induction of gastric lesions. Biochem. Pharmacol. 67, 575-585 (2004).
11. Massó González, E. L., Patrignani, P., Tacconelli, S. & García Rodríguez, L. A. Variability among nonsteroidal antiinflammatory drugs in risk of upper gastrointestinal bleeding. Arthritis Rheum. 62(6), 1592-1601 (2010).
12. Lanza, F. L., Chan, F. K. L. & Quigley, E. M. M. Guidelines for prevention of NSAID-related ulcer complications. Am. J. Gastroenterol. 104(3), 728-738 (2009).
13. Huang, J. Q., Sridhar, S. & Hunt, R. H. Role of Helicobacter pylori infection and non-steroidal anti-inflammatory drugs in peptic-ulcer disease: A meta-analysis. Lancet 359(9300), 14-22 (2002).
14. Targownik, L. E., Bolton, J. M., Metge, C. J., Leung, S. & Sareen, J. Selective serotonin reuptake inhibitors are associated with a modest increase in the risk of upper gastrointestinal bleeding. Am. J. Gastroenterol. 104(6), 1475-1482 (2009).
Author information
Authors and Affiliations
1. Department of Internal Medicine, CHI Health Creighton University Medical Center-Bergan Mercy, 7710 Mercy Rd, Suite 301, Omaha, NE, 68124, USA: Syed Mobashshir Alam, Mohammed Qasswal & Muhammad Junaid Ahsan
2. Department of Clinical Research and Public Health, Creighton University School of Medicine, CHI Health Creighton University Medical Center-Bergan Mercy, 7710 Mercy Rd, Suite 502, Omaha, NE, 68124, USA: Ryan W. Walters & Subhash Chandra

Contributions
S.A., M.Q., and S.C. wrote the main manuscript text. M.A. helped with study selection and review. R.W. created figures and tables and wrote the "Results" section of the manuscript.

Corresponding author
Correspondence to Syed Mobashshir Alam.

Ethics declarations
Competing interests
The authors declare no competing interests.

Additional information
Publisher's note: Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.

Rights and permissions
Open Access: This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder.

Cite this article
Alam, S.M., Qasswal, M., Ahsan, M.J. et al. Selective serotonin reuptake inhibitors increase risk of upper gastrointestinal bleeding when used with NSAIDs: a systemic review and meta-analysis. Sci Rep 12, 14452 (2022).
189324
https://www.youtube.com/watch?v=0BdExLqS5mU
Waves and Sound Note 12: Open and Closed Air Columns
Jeff Shaw
Posted: 5 May 2014
Description: Preston Physics Grade 11 Waves and Sound Note 12: Open and Closed Air Columns

Transcript:
Introduction. Preston Physics, grade 11, waves and sound, note 12: open and closed air columns. When we're looking at air columns, what we're looking at is a standing wave in the middle of a pipe. This standing wave is set up because of resonance. Remember that resonance is when something is free to vibrate at a natural frequency when energy is put into the system. So when we put energy into a pipe, we get a standing wave and it becomes an air column. We can either have a closed-end air column, where we have a node at the end because the waves have to come to a completion there, or an open-ended air column, where we get a crest at the end.

Harmonics. The patterns that occur in air columns are called harmonics. With an open-ended air column, the first harmonic occurs when we have half of a wavelength: the wave goes in, and halfway up we get our first crest. The next harmonic occurs at a full wavelength: the wave goes up and then comes back down, creating what we call the second harmonic. The third harmonic is one and a half wavelengths, or 3/2: we get one and a half wavelengths through the pipe and we get the third harmonic. A closed-end air column is a little different. Its first harmonic actually occurs at a quarter wavelength. We then skip the second harmonic and go right to the third, because the next harmonic occurs at 3/4 of a wavelength. The number on top of the fraction is what names the harmonic. The next harmonic after that is 5/4 wavelengths, which is the fifth harmonic.

Equations. We can develop equations to find the frequency in both of these patterns by looking at what occurred over and over again. For an open-ended air column, the frequency is f = nv/(2L); for a closed-end column, f = nv/(4L), where f is the frequency in hertz, n is the number of the desired harmonic, v is the speed of sound, and L is the length of the pipe. One important thing to note here is that if something asks for the fundamental frequency, that is when n = 1; n = 1, again, is the fundamental frequency of the air column we're looking at. In the first example, we have a culvert that's 1.23 m long, it's 20 °C outside, and we want to find the frequencies of the first three harmonics as both a closed and an open air column. With v = 332 + 0.6(20) = 344 m/s, first we do the open air column (part a), using n = 1, 2, and 3. The first harmonic is f = (1)(344)/(2 × 1.23) ≈ 139.8 Hz. For the second harmonic we sub in 2 instead of 1 and get about 279.7 Hz; following the same pattern, the third harmonic is about 419.5 Hz. For the closed column we again start with n = 1, which gives about 69.9 Hz, but remember closed air columns go 1, 3, and then 5: subbing in 3 we get about 209.8 Hz, and for 5 we get about 349.6 Hz.
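A small sketch of the note's two formulas, f = nv/(2L) for open columns and f = nv/(4L) with odd n for closed columns, using the v = 332 + 0.6T approximation for the speed of sound; the function names are ours.

```python
# Harmonic frequencies of open and closed air columns.
def speed_of_sound(temp_c):
    return 332 + 0.6 * temp_c  # m/s, the approximation used in the note

def harmonics(length_m, temp_c, closed=False, count=3):
    v = speed_of_sound(temp_c)
    ns = range(1, 2 * count, 2) if closed else range(1, count + 1)  # closed pipes skip even n
    divisor = 4 if closed else 2
    return {n: n * v / (divisor * length_m) for n in ns}

print(harmonics(1.23, 20))               # open:   {1: 139.8..., 2: 279.6..., 3: 419.5...}
print(harmonics(1.23, 20, closed=True))  # closed: {1: 69.9..., 3: 209.7..., 5: 349.5...}
```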
189325
https://www.ncbi.nlm.nih.gov/Taxonomy/Browser/wwwtax.cgi?mode=info&id=10090
Taxonomy browser (Mus musculus)

Mus musculus
Taxonomy ID: 10090 (for references in articles please use NCBI:txid10090)
Current name: Mus musculus Linnaeus, 1758
Includes: Balb/c mouse; LK3 transgenic mice; Mus sp. 129SV; nude mice; transgenic mice
GenBank common name: house mouse
NCBI BLAST name: rodents
Rank: species
Genetic code: Translation table 1 (Standard)
Mitochondrial genetic code: Translation table 2 (Vertebrate Mitochondrial)
Other names, common name(s): mouse
Lineage (full): cellular organisms; Eukaryota; Opisthokonta; Metazoa; Eumetazoa; Bilateria; Deuterostomia; Chordata; Craniata; Vertebrata; Gnathostomata; Teleostomi; Euteleostomi; Sarcopterygii; Dipnotetrapodomorpha; Tetrapoda; Amniota; Mammalia; Theria; Eutheria; Boreoeutheria; Euarchontoglires; Glires; Rodentia; Myomorpha; Muroidea; Muridae; Murinae; Mus; Mus

Entrez records
  Database name              Subtree links   Direct links
  Nucleotide                    11,190,032     10,579,020
  Protein                          387,356        377,599
  Structure                          9,881          9,875
  Conserved Domains                     22             22
  GEO Datasets                   2,480,234      2,479,201
  PubMed Central                    42,644         42,237
  Gene                             251,852        251,757
  HomoloGene                        19,032         19,032
  SRA Experiments                2,780,034      2,766,045
  GEO Profiles                  50,177,624     50,177,624
  Protein Clusters                      13             13
  Identical Protein Groups         222,712        219,811
  BioProject                        94,827         94,414
  BioSample                      2,778,000      2,764,382
  Datasets Genome                      104             90
  PubChem BioAssay                 233,488        233,484
  Taxonomy                              18              1

Comments and References:
Image: Mus musculus, by Ilmari Karonen, from Wikimedia Commons under a Public Domain license. The image may not have been verified for accuracy by NCBI Taxonomy.
Note: Some journals identify organisms only with vernacular names like "mouse" or "rats". Currently, an organism identified as "mouse" will be found under the name "Mus sp.", although we are aware that most of the corresponding sequence records are likely to be from "Mus musculus". If it is important to see all records for "Mus musculus", you should therefore consider retrieving the records for "Mus sp." as well.
Genome Information
See the NCBI Genome homepage. Go to the NCBI genomic BLAST page for Mus musculus.

External Information Resources (NCBI LinkOut)
  Animal Diversity Web (taxonomy/phylogenetic): Mus musculus; 2 records from this provider
  AnimalBase (taxonomy/phylogenetic)
  Arctos Specimen Database (taxonomy/phylogenetic): Mus musculus taxonomy
  Barcodes of Life (taxonomy/phylogenetic): DNA barcoding: Mus musculus
  Digital Morphology (images): DigiMorph; 166 records from this provider
  Dryad Digital Repository (supplemental materials): Access Curated Datasets
  Encyclopedia of Life (taxonomy/phylogenetic): Mus musculus Linnaeus, 1758
  Genomes On Line Database (organism-specific): GOLD: 31 Organisms
  Global Biotic Interactions (taxonomy/phylogenetic): Show Biotic Interactions
  Immune Epitope Database and Analysis Resource (gene/protein/disease-specific): Related Immune Epitope Information
  Integrated Microbial Genomes (organism-specific): 639370800: Mus musculus C57BL/6J
  Integrated Taxonomic Information System (taxonomy/phylogenetic): Mus musculus Linnaeus, 1758
  Lifemap (taxonomy/phylogenetic): Mus musculus
  Mammal Species of the World (taxonomy/phylogenetic): Mus musculus Linnaeus 1758
  Mammals of Texas (taxonomy/phylogenetic): MamText
  NCBI taxonomy bookmarks (taxonomy/phylogenetic): 2 records from this provider
  NHGRI genome proposal white papers (taxonomy/phylogenetic): 2 records from this provider
  OMA Browser: Orthologous MAtrix (taxonomy/phylogenetic): OMA
  Ocean Biogeographic Information System (taxonomy/phylogenetic): Mus musculus Linnaeus, 1758
  UCSC Genome Browser (sequence screening/similarity/alignment): UCSCgb
  WebScipio - eukaryotic gene identification (organism-specific): 2 records from this provider
  World Register of Marine Species (taxonomy/phylogenetic): Mus musculus Linnaeus, 1758
  diArk - a resource for eukaryotic genome research (organism-specific): 17 records from provider

Disclaimer: The NCBI taxonomy database is not an authoritative source for nomenclature or classification; please consult the relevant scientific literature for the most reliable information.
Reference: How to cite this resource - Schoch CL, et al. NCBI Taxonomy: a comprehensive update on curation, resources and tools. Database (Oxford). 2020: baaa062. PubMed: 32761142. PMC: PMC7408187.
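For readers who want this record programmatically, NCBI's standard E-utilities efetch endpoint serves the taxonomy database as XML; the sketch below fetches taxid 10090 and prints a few fields. The element names assume the usual efetch taxonomy XML layout.

```python
# Hedged sketch: fetch the Mus musculus taxonomy record via NCBI E-utilities.
import urllib.request
import xml.etree.ElementTree as ET

url = ("https://eutils.ncbi.nlm.nih.gov/entrez/eutils/efetch.fcgi"
       "?db=taxonomy&id=10090")
with urllib.request.urlopen(url) as resp:
    root = ET.fromstring(resp.read())   # root element is a TaxaSet

taxon = root.find("Taxon")
print(taxon.findtext("ScientificName"))  # Mus musculus
print(taxon.findtext("Rank"))            # species
print(taxon.findtext("Lineage"))         # cellular organisms; Eukaryota; ...
```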
189326
https://youngchemist.com/images/mimage010.htm
Standard Atomic Weights (2013)

Standard atomic weights are recommended values of relative atomic masses of the elements, revised biennially by the IUPAC Commission on Atomic Weights and Isotopic Abundances and applicable to elements in any normal sample with a high level of confidence. A normal sample is any reasonably possible source of the element or its compounds in commerce for industry and science that has not been subject to significant modification of isotopic composition within a geologically brief period. (Source: IUPAC Gold Book)

Relative atomic mass, or atomic weight, is the ratio of the average mass of the atom to the unified atomic mass unit. The unified atomic mass unit (symbol: u) is a non-SI unit of mass equal to the atomic mass constant, defined as one twelfth of the mass of a carbon-12 atom in its ground state and used to express masses of atomic particles (u is approximately equal to 1.6605402(10)×10⁻²⁷ kg and is also equal to the dalton (symbol: Da), a non-SI unit of mass often used in biochemistry and molecular biology).

The recommended symbol of relative atomic mass is A_r, where A is printed in italic (sloping) type and modified by the subscript r printed in Roman (upright) type. For elements with no stable isotopes, denoted by u in the above table, individual isotopic masses can be found in other tables.
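As a quick worked illustration of the definition above: multiplying a relative atomic mass by u converts it to an absolute mass in kilograms. The short Python sketch below is our own addition (the function name is ours; the u value is the one quoted in the text, and carbon-12's relative atomic mass of exactly 12 is fixed by definition):

    # Convert a dimensionless relative atomic mass A_r to an absolute mass in kg.
    U_KG = 1.6605402e-27  # unified atomic mass unit (u) in kg, as quoted above

    def atomic_mass_kg(relative_atomic_mass):
        """Average mass of one atom in kilograms: A_r times u."""
        return relative_atomic_mass * U_KG

    print(atomic_mass_kg(12.0))  # carbon-12: about 1.993e-26 kg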
189327
https://enthu.com/blog/chemistry/avogadros-number-related-periodic-table?srsltid=AfmBOorTM7JTtd5blrfRZl6ISclXi45CTjsV85vhVVlrec1xSh-kgx8-
Avogadro’s Number Explained | Periodic Table Insights

Explore Avogadro’s number, its significance in chemistry, and its connection to the periodic table. Learn how it simplifies atomic and molecular calculations!

By PK, Jan 25, 2023

The atomic mass listed in the periodic table represents the mass of one mole of atoms, that is, of Avogadro's number of atoms. Avogadro's number (6.022 × 10²³) acts as a bridge between the microscopic world of atoms and the macroscopic quantities we use in chemistry. It allows us to relate the tiny mass of individual atoms to measurable amounts in the laboratory.

What We Will Learn

In this article, we will explore three important concepts in chemistry: Atomic Mass, Mole, and Avogadro's Number. Understanding these terms will help us see how chemists calculate the amount of substances, compare different elements, and analyze chemical reactions in a simple and systematic way.

Why is Avogadro's Number Related to the Numbers on the Periodic Table?

Avogadro's number is important because it acts as a bridge between very large numbers and familiar, manageable units. Avogadro's number essentially allows us to write the mass of one mole of a substance in small numbers (the molecular weight). It also allows us to express the ratios of reactants and products in a chemical equation. This simplifies calculations significantly.

The periodic table is a tabular arrangement of chemical elements, organized on the basis of their atomic number, electron configurations, and chemical properties. The atomic mass of an element, typically given in atomic mass units (amu) or grams per mole (g/mol), gives the mass of one mole of that element's atoms, and one mole always contains Avogadro's number of atoms. To convert a mass of an element in grams to a number of atoms, you divide by the atomic mass in g/mol and multiply by Avogadro's number (6.022 × 10²³). Therefore, Avogadro's number is related to the numbers on the periodic table because it links the atomic mass of an element to a count of atoms.

1. Atomic Mass

Each element's atoms have a characteristic number of protons, which determines which atom we are looking at. As a result, the atomic number refers to the number of protons in an atom. The number of neutrons for a given element, on the other hand, can differ. Atomic mass is the total mass of matter particles in an atom, which is the sum of the masses of protons, neutrons, and electrons in an atom. However, electrons are so small that they are insignificant when calculating an atom's mass. As a result, the atomic mass of a single atom is equal to the total number of protons and neutrons.

2. Mole - Definition

The amount of substance is a measurement of how many elementary entities of a given substance are present in an object or sample. The mole is defined as having 6.02214076 × 10²³ elementary entities.
An elementary entity can be an atom, a molecule, an ion, an ion pair, or a subatomic particle such as an electron, depending on the substance. For example, despite having different volumes and masses, 10 moles of water (a chemical compound) and 10 moles of mercury (a chemical element) contain equal amounts of substance, and the mercury contains exactly one atom for each molecule of water.

3. Avogadro's Number

The number of units of any substance in one mole is known as Avogadro's number. It is also known as Avogadro's constant. It is named after Amedeo Avogadro, who made significant contributions to the field of chemistry. Avogadro's number is a fixed number that equals 6.02214076 × 10²³. When used as a constant proportionality factor, the number is dimensionless (no units). Avogadro's number is one mole, so asking how big Avogadro's number is amounts to asking how big a mole is. Avogadro's number can be applied to anything: atoms, molecules, ions, or electrons.

4. How is Avogadro's Number Related to the Numbers on the Periodic Table?

Using Avogadro's Number to Calculate Atomic Mass

The number of particles in one mole of anything is known as Avogadro's number. It is the number of atoms in one mole of an element in this context. Using Avogadro's number, it is simple to calculate the mass of a single atom. To obtain the answer in grams, divide the element's relative atomic mass by Avogadro's number. The same procedure is used to determine the mass of a single molecule. It can be done by adding all of the atomic masses in the chemical formula and dividing by Avogadro's number.

💡 Also Refer: What is Molar Mass?

Sample Calculation

Determine the mass of a single carbon (C) atom in grams.

Solution

The atomic mass of carbon = 12.01 g, the mass of one mole of carbon. 1 mole of carbon = 6.022 × 10²³ atoms of carbon (Avogadro's number). This relationship is then used to 'convert' a carbon atom to grams using the following ratio:

mass of 1 C atom = 12.01 g/mol ÷ (6.022 × 10²³ atoms/mol)
mass of 1 C atom = 1.994 × 10⁻²³ g

Key Takeaways

- The atomic number is the number of protons in an atom.
- The atomic mass is the total number of protons and neutrons in an atom.
- A mole is a unit of measurement used in chemistry.
- The mole and Avogadro's number connect the atom or molecule (microscopic) to the amount of substance used in laboratories (macroscopic).
- The number 6.022 × 10²³ of anything is Avogadro's number.
- A mole of a substance has a specific mass and tends to take up a specific volume.

Conclusion

In summary, Avogadro's number links the mass of a single atom or molecule to the gram-scale quantities measured in the laboratory. Together, atomic mass, the mole, and Avogadro's number let chemists count particles by weighing, compare different elements on a consistent scale, and analyze chemical reactions in a simple and systematic way.

FAQs

1. What is the relationship between moles and Avogadro's number?

The relationship between moles and Avogadro's number is that Avogadro's number is used to define the number of entities (such as atoms, ions, or molecules) in one mole of a substance. One mole of a substance is equal to Avogadro's number of entities, which is approximately 6.022 × 10²³.

2. What is Avogadro's number?
Avogadro's number, also known as Avogadro's constant, is a fundamental constant of physics and chemistry that is used to define the number of atoms or molecules in a mole of a substance. It is approximately 6.022 × 10²³ atoms or molecules per mole.

3. How is Avogadro's number related to atomic mass?

Avogadro's number connects the atomic mass of an element, expressed in grams per mole, to a count of atoms: dividing the mass of a sample in grams by the atomic mass gives the number of moles, and multiplying by Avogadro's number (6.022 × 10²³) gives the number of atoms present in the sample.

4. How does Avogadro's number help in understanding the chemical behavior of elements?

Avogadro's number helps to understand the chemical behavior of elements because it allows us to compare the number of atoms or molecules of different elements on a consistent scale. By knowing Avogadro's number, we can compare the number of atoms or molecules in a mole of one element to the number of atoms or molecules in a mole of another element, which can help to explain the chemical behavior of these elements and their reactions with other elements.

💡 Authored by Shilendradp Pawar
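The sample calculation above is easy to reproduce programmatically. Here is a minimal Python sketch of our own (the helper names are ours; the molar mass of carbon is the periodic-table value used in the article):

    AVOGADRO = 6.02214076e23  # entities per mole

    def mass_of_one_atom_g(molar_mass_g_per_mol):
        """Mass of a single atom in grams: molar mass divided by Avogadro's number."""
        return molar_mass_g_per_mol / AVOGADRO

    def atoms_in_sample(mass_g, molar_mass_g_per_mol):
        """Number of atoms in a sample: grams -> moles -> atoms."""
        return (mass_g / molar_mass_g_per_mol) * AVOGADRO

    print(mass_of_one_atom_g(12.01))     # carbon: ~1.994e-23 g, matching the worked example
    print(atoms_in_sample(24.02, 12.01)) # 2 mol of carbon: ~1.204e24 atoms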
189328
https://www.math3ma.com/blog/classifying-surfaces
March 16, 2016 • Topology

Classifying Surfaces (CliffsNotes Version)

The Basic Idea

My goal for today is to provide a step-by-step guideline for classifying closed surfaces. (By 'closed,' I mean a surface that is compact and has no boundary.) The information below may come in handy for any topology student who needs to know just the basics (for an exam, say, or even for other less practical (but still mathematically elegant) endeavors), so there won't be any proofs today.

Given a polygon with certain edges identified, we can determine the surface that it represents in just three easy steps:

Step 1: Find the Euler characteristic
Step 2: Determine if it's orientable or non-orientable
Step 3: Calculate its genus

In the next section, I'll fill in the details. We'll use the following surface as our working example:

[image: a blue octagon with edge word abca⁻¹dc⁻¹db]

From English to Math

Step 1: Find the Euler Characteristic

Let's call our surface M_g, where g is its genus (which we have yet to determine). You recall that the Euler characteristic χ(M_g) is given by

χ(M_g) = #vertices − #edges + #faces.

Determining the number of edges is easy - you simply count the number of unique edges in the polygon. In our case, the edges are labeled a, b, c and d. Four letters implies four edges. Also, there is exactly one face (the octagon's blue shaded region). Hence e = 4 and f = 1. Counting vertices is the only tricky part, and it boils down to "chasing arrows" according to how the edges are glued together. The following video clip shows how this is done with our blue octagon. Since our surface has 2 vertices, we conclude that the Euler characteristic is

χ(M_g) = v − e + f = 2 − 4 + 1 = −1.

Step 2: Determine Orientability

Next, we need to determine if our surface is orientable or non-orientable. In sum:

- any surface that does NOT contain a Möbius band is orientable;
- any surface that DOES contain a Möbius band is non-orientable.

(This really follows from the fact that a projective plane is the same as a Möbius band and disk glued together along their boundaries. Remember our discussion on the projective plane? And speaking of Möbius bands, have you ever seen Vihart's tale of Wind and Mr. Ug?)

So how do we determine if our surface contains a Möbius band? It's actually quite easy! You just sort of "eyeball it": Since we found a Möbius band (Yes, yes, we actually found two, but that's okay. It's enough to find at least one.) our surface is non-orientable.

As a side note, notice that in our polygon's surface symbol, abca⁻¹dc⁻¹db, the inverse of neither b nor d is present. You'll also notice that the two Möbius bands we found above are associated with edges b and d. This is not a coincidence.

Step 3: Calculate the genus

Finally, it remains to determine our surface's genus g. This step is straightforward. We simply use the fact that χ(M_g) = 2 − 2g if M_g is orientable, and χ(M_g) = 2 − g if M_g is non-orientable. In our example, M_g is non-orientable and χ(M_g) = −1, hence

g = 2 − χ(M_g) = 2 − (−1) = 3.

The Classification Theorem

We now have all the information needed to determine our surface! We simply apply the Classification Theorem:

The Classification Theorem: Any closed surface is homeomorphic to one of the following:
- a sphere
- a connected sum of tori
- a connected sum of projective planes

We can summarize this along with our observations about orientability as follows:

- An orientable surface of genus g is a connected sum of g tori: T#T#⋯#T
- A non-orientable surface of genus g is a connected sum of g projective planes: RP²#RP²#⋯#RP²

In our example, we found that M_g is a non-orientable surface of genus 3.
Hence it is a connected sum of three projective planes (a.k.a. cross caps), RP²#RP²#RP². Voila!

As an aside, it's helpful to note that RP²#RP²#RP² ≅ T#RP². This tells us that any surface that is a connected sum of both tori and projective planes can be written as a connected sum of projective planes only. To see why the above holds, we have the following two claims:

Claim 1: RP²#RP² ≅ K

Proof: Recall that "RP²#RP²" means: "remove a disk from both copies of RP² and glue the remaining spaces together." But from previous work we know that RP² − disk = Möbius band! In other words, RP²#RP² is precisely the space consisting of two Möbius bands glued together along their boundary. But this is precisely a Klein bottle!

Claim 2: T#RP² ≅ K#RP²

Proof: This is a direct consequence of the following observation: T#M ≅ K#M, where M is a Möbius band. The drawing below, borrowed from The Shape of Space by Jeffrey Weeks, illustrates this wonderfully. It follows that T#RP² ≅ K#RP² since we can glue a disk to the left and right hand sides of T#M ≅ K#M (and, as we mentioned in the proof of Claim 1, gluing a disk to a Möbius band gives us a projective plane).

Combining Claims 1 and 2, we see that T#RP² ≅ K#RP² ≅ RP²#RP²#RP², as stated above.

Note: a sphere has genus 0.
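The three-step recipe above is mechanical enough to automate. Below is a minimal Python sketch of our own (not from the post) that takes a surface word such as abca⁻¹dc⁻¹db, written as a list of (letter, exponent) pairs, together with a vertex count found by chasing arrows as in Step 1, and returns the Euler characteristic, orientability, and genus. Orientability is read off the word using the observation made above: the surface is non-orientable exactly when some letter appears twice with the same exponent.

    from collections import Counter

    def classify_surface(word, num_vertices):
        """Classify a closed surface given an edge word and a vertex count.

        word: list of (letter, exponent) pairs, e.g. the octagon's word
              a b c a^-1 d c^-1 d b.
        num_vertices: found by "chasing arrows" around the identified polygon.
        """
        letters = [ltr for ltr, _ in word]
        edges = set(letters)
        assert all(Counter(letters)[e] == 2 for e in edges), "edges glued in pairs"

        v, e, f = num_vertices, len(edges), 1  # a single polygonal face
        chi = v - e + f                        # Euler characteristic

        # Orientable iff every letter occurs once with each sign (x ... x^-1).
        orientable = all(sum(exp for ltr, exp in word if ltr == edge) == 0
                         for edge in edges)

        genus = (2 - chi) // 2 if orientable else 2 - chi
        if genus == 0:
            return chi, orientable, genus, "sphere"
        kind = "connected sum of %d tori" if orientable \
               else "connected sum of %d projective planes"
        return chi, orientable, genus, kind % genus

    # The blue octagon: chi = 2 - 4 + 1 = -1, non-orientable, genus 3.
    octagon = [("a", 1), ("b", 1), ("c", 1), ("a", -1),
               ("d", 1), ("c", -1), ("d", 1), ("b", 1)]
    print(classify_surface(octagon, num_vertices=2))
    # (-1, False, 3, 'connected sum of 3 projective planes')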
189329
https://mitocw.ups.edu.ec/courses/physics/8-01sc-classical-mechanics-fall-2016/readings/MIT8_01F16_chapter8.1.pdf
8.1 Force Laws

There are forces that don't change appreciably from one instant to another, which we refer to as constant in time, and forces that don't change appreciably from one point to another, which we refer to as constant in space. The gravitational force on an object near the surface of the earth is an example of a force that is constant in space. There are forces that depend on the configuration of a system. When a mass is attached to one end of a spring, the spring force acting on the object increases in strength the more the spring is extended or compressed. There are forces that spread out in space such that their influence becomes less with distance. Common examples are the gravitational and electrical forces. The gravitational force between two objects falls off as the inverse square of the distance separating the objects, provided the objects are of a small dimension compared to the distance between them. More complicated arrangements of attracting and repelling interactions give rise to forces that fall off with other powers of r: constant, 1/r, 1/r², 1/r³, ....

A force may remain constant in magnitude but change direction; for example, the gravitational force acting on a planet undergoing circular motion about a star is directed towards the center of the circle. This type of attractive central force is called a centripetal force. A force law describes the relationship between the force and some measurable property of the objects involved. We shall see that some interactions are describable by force laws and other interactions cannot be so simply described.

8.1.1 Hooke's Law

In order to stretch or compress a spring from its equilibrium length, a force must be exerted on the spring. Consider an object of mass m that is lying on a horizontal surface. Attach one end of a spring to the object and fix the other end of the spring to a wall. Let l₀ denote the equilibrium length of the spring (neither stretched nor compressed). Assume that the contact surface is smooth and hence frictionless in order to consider only the effect of the spring force. If the object is pulled to stretch the spring or pushed to compress the spring, then by Newton's Third Law the force of the spring on the object is equal and opposite to the force that the object exerts on the spring. We shall refer to the force of the spring on the object as the spring force and experimentally determine a relationship between that force and the amount of stretch or compression of the spring.

Choose a coordinate system with the origin located at the point of contact of the spring and the object when the spring-object system is in the equilibrium configuration. Choose the î unit vector to point in the direction the object moves when the spring is being stretched. Choose the coordinate function x to denote the position of the object with respect to the origin (Figure 8.1).

[Figure 8.1: Spring attached to a wall and an object, shown in the equilibrium configuration (x = 0), stretched (x > 0), and compressed (x < 0), with the î unit vector pointing in the stretch direction.]

Initially stretch the spring until the object is at position x. Then release the object and measure the acceleration of the object the instant the object is released. The magnitude of the spring force acting on the object is |F| = m|a|. Now repeat the experiment for a range of stretches (or compressions).
Experiments show that for each spring, there is a range of maximum values x_max > 0 for stretching and minimum values x_min < 0 for compressing such that the magnitude of the measured force is proportional to the stretched or compressed length and is given by the formula

|F| = k|x|,   (8.1.1)

where the spring constant k has units N·m⁻¹. The free-body force diagram is shown in Figure 8.2.

[Figure 8.2: Spring force acting on the object, F = F_x î = −kx î.]

The constant k is equal to the negative of the slope of the graph of the force vs. the compression or stretch (Figure 8.3).

[Figure 8.3: Plot of the x-component of the spring force F_x vs. x, a line of slope −k between x_min and x_max.]

The direction of the acceleration is always towards the equilibrium position whether the spring is stretched or compressed. This type of force is called a restoring force. Let F_x denote the x-component of the spring force. Then

F_x = −kx.   (8.1.2)

Now perform similar experiments on other springs. For a range of stretched lengths, each spring exhibits the same proportionality between force and stretched length, although the spring constant may differ for each spring.

It would be extremely impractical to experimentally determine whether this proportionality holds for all springs, and because a modest sampling of springs has confirmed the relation, we shall infer that all ideal springs will produce a restoring force which is linearly proportional to the stretched (or compressed) length. This experimental relation regarding force and stretched (or compressed) lengths for a finite set of springs has now been inductively generalized into the above mathematical model for ideal springs, a force law known as Hooke's Law. This inductive step, referred to as Newtonian induction, is the critical step that makes physics a predictive science.

Suppose a spring, attached to an object of mass m, is stretched by an amount Δx. Use the force law to predict the magnitude of the force between the spring and the object, |F| = k|Δx|, without having to experimentally measure the acceleration. Now use Newton's Second Law to predict the magnitude of the acceleration of the object:

|a| = |F|/m = k|Δx|/m.   (8.1.3)

Carry out the experiment, and measure the acceleration within some error bounds. If the magnitude of the predicted acceleration disagrees with the measured result, then the model for the force law needs modification. The ability to adjust, correct or even reject models based on new experimental results enables a description of forces between objects to cover larger and larger experimental domains.

Many real springs have been wound such that a force of magnitude F₀ must be applied before the spring begins to stretch. The value of F₀ is referred to as the pre-tension of the spring. Under these circumstances, Hooke's law must be modified to account for this pre-tension:

F_x = −F₀ − kx,  x > 0;
F_x = +F₁ − kx,  x < 0.   (8.1.4)

Note the values of the pre-tensions F₀ and F₁ may differ for compressing or stretching a spring.

MIT OpenCourseWare, 8.01 Classical Mechanics, Fall 2016.
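Equation (8.1.3) is straightforward to turn into a quick numerical check. The following Python sketch is our own illustration (the spring constant, stretch, and mass are made-up example values, not from the text):

    def spring_force_magnitude(k, stretch):
        """Hooke's law, Eq. (8.1.1): |F| = k * |x|."""
        return k * abs(stretch)

    def predicted_acceleration(k, stretch, mass):
        """Newton's Second Law with Hooke's law, Eq. (8.1.3): |a| = k|dx|/m."""
        return spring_force_magnitude(k, stretch) / mass

    # Example: a k = 50 N/m spring stretched 0.10 m, attached to a 0.25 kg object.
    k, dx, m = 50.0, 0.10, 0.25
    print(spring_force_magnitude(k, dx))     # 5.0 N
    print(predicted_acceleration(k, dx, m))  # 20.0 m/s^2

Comparing the printed prediction against a measured acceleration, within error bounds, is exactly the test of the force law described above.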
189330
https://digitalcommons.wayne.edu/cgi/viewcontent.cgi?article=1132&context=jmasm
Journal of Modern Applied Statistical Methods
Volume 6, Issue 1, Article 15 (May 2007)

Better Binomial Confidence Intervals

James F. Reed III
Lehigh Valley Hospital and Health Network

Recommended Citation: Reed, James F. III (2007). "Better Binomial Confidence Intervals," Journal of Modern Applied Statistical Methods: Vol. 6: Iss. 1, Article 15. DOI: 10.22237/jmasm/1177992840

The construction of a confidence interval for a binomial parameter is a basic analysis in statistical inference. Most introductory statistics textbook authors present the binomial confidence interval based on the asymptotic normality of the sample proportion and estimating the standard error - the Wald method. For the one sample binomial confidence interval the Clopper-Pearson exact method has been regarded as definitive as it eliminates both overshoot and zero width intervals. The Clopper-Pearson exact method is the most conservative and is unquestionably a better alternative to the Wald method. Other viable alternatives include Wilson's Score, the Agresti-Coull method, and the Borkowf SAIFS-z.

Key words: Binomial distribution, confidence intervals, coverage probability, Wald method, Clopper-Pearson method, Score method, Agresti-Coull method.

Introduction

The International Committee of Medical Journal Editors indicated that confidence intervals are preferred over simple point estimates and p-values. This applies to over 300 international medical/scientific journals. Most introductory statistics textbook authors present the binomial confidence interval based on the asymptotic normality of the sample proportion and estimating the standard error. This approximate method is referred to as the Wald interval. In order to avoid approximation, some advanced statistics textbooks recommend the Clopper-Pearson exact binomial confidence interval. Other methods, asymptotic as well as exact, have been proposed and appear sporadically in introductory textbooks. There is a rather large set of articles, primarily in the statistics literature, about these and other less common methods of constructing binomial confidence intervals. The purpose of this article is to provide a review of alternatives to the Wald method for computing a binomial confidence interval and provide a set of tractable and better methods of constructing binomial confidence intervals for a single proportion.

James Reed III is the Interim Chief of Health Studies and Director of Research at Lehigh Valley Hospital and Health Network. He has published over 100 journal articles and book chapters. His interests include applied statistical analyses, medical education, and statistical methods in simulation studies. Email: James_F.Reed@lvh.com

Methodology

When a binomial confidence interval is reported, the computational method is rarely given.
This may imply that there is only one standard method for computing a binomial confidence interval - the Wald method (W). The W binomial confidence interval, either with or without a continuity correction, is found in every introductory statistics text. Typically, a warning or rule of thumb for determining when not to use W is included, but usually ignored. Occasionally, the Wald with a continuity correction (WCC) is included. For a single proportion, the W and WCC lower bound (LB) and upper bound (UB) are defined as:

W:   LB = p − z_{α/2}√[pq/n],  UB = p + z_{α/2}√[pq/n]
WCC: LB = p − (z_{α/2}√[pq/n] + 1/(2n)),  UB = p + (z_{α/2}√[pq/n] + 1/(2n))

where p = r/n, q = 1 − p, r = the number of successes, and n is the total sample size. Even though these two confidence interval methods are similar to large-sample formulas for means, both the W and WCC confidence intervals behave poorly in terms of zero width intervals and overshoot (Beal, 1987; Vollset, 1993; Newcombe, 1998; Pires, 2002; Reiczigel, 2003; Agresti, 2003). For instance, when r = 0 or r = n, W and WCC have zero width or degenerate confidence intervals.

Despite the known poor performance of the W and WCC confidence intervals, they continue to dominate in statistics textbooks, typically accompanied by warnings that when np is small, usually less than 5 or 10, exact or score methods should be used. A slightly different version of the rule of thumb requires that npq should be greater than or equal to 5. A better rule is to not compute confidence bounds for a proportion using the W method but rather to use one of the better methods. For small proportions the calculated lower bound can be below zero. Conversely, when a proportion approaches one, such as in the sensitivity and specificity of diagnostic or screening tests, the upper bound may exceed one. This overshoot is avoided by truncating the interval to lie within [0, 1]. Overshoot and zero width confidence intervals may be avoided by a variety of better methods.

One of the standard measures of binomial confidence interval performance is the coverage probability, C(π|n, α). Given X = k, n, and α, let δ(π|k, n, α) = 1 if π ∈ [LB(k, n, α), UB(k, n, α)], and δ(π|k, n, α) = 0 otherwise. Then, C(π|n, α) for a given π is:

C(π|n, α) = Σ_k P(X = k|n, π) δ(π|k, n, α)

Figure 1 shows the 95% confidence interval coverage probability of the standard Wald methods {W, WCC} as a function of π, π ∈ [0,1], for n = 20. The coverage probability curves demonstrate the subnominal coverage for values of π near 0 and 1.

The Clopper-Pearson (CP) binomial confidence interval is the best-known exact method for interval estimation and is considered by most to be the gold standard (Clopper & Pearson, 1934). The CP confidence interval eliminates overshoot and zero width intervals and is strictly conservative. The CP lower and upper limits are defined by inverting the exact binomial tests with equal-tailed acceptance regions:

CP: LB = [1 + (n − r + 1)/(r · F_{2r, 2(n−r+1), 1−α/2})]⁻¹, with LB = 0 if r = 0 and LB = (α/2)^{1/n} if r = n;
    UB = [1 + (n − r)/((r + 1) · F_{2(r+1), 2(n−r), α/2})]⁻¹, with UB = 1 − (α/2)^{1/n} if r = 0 and UB = 1 if r = n.

Fleiss (1981) preferred a more computationally intense binomial confidence interval with a continuity correction (SCC) attributed to Wilson (Wilson, 1927).
For a single proportion, Wilson's Score (S) and Wilson's Score with continuity correction (SCC) LB and UB are defined as:

S:   LB = (2np + z² − z√{z² + 4npq}) / (2(n + z²))
     UB = (2np + z² + z√{z² + 4npq}) / (2(n + z²))

SCC: LB = [2np + z² − 1 − z√{z² − 2 − 1/n + 4p(nq + 1)}] / (2n + 2z²)
     UB = [2np + z² + 1 + z√{z² + 2 − 1/n + 4p(nq − 1)}] / (2n + 2z²)

Blyth and Still (1983) investigated the performance of W, WCC, CP, Sterne's binomial confidence interval method (Sterne, 1954), and Pratt's (P) approximate confidence interval method (Pratt, 1968). Their results demonstrate the need for a continuity correction even when n is large. Blyth and Still then suggested a modification to W (WBS). While the WBS was an improvement over W and WCC, they concluded that it still was not satisfactory. The LB and UB for WBS are defined as:

WBS: LB = p − [z/√(n − z² − 2z/√n − 1/n)][√(pq) + 1/(2n)], except LB = 0 when r = 0
     UB = p + [z/√(n − z² − 2z/√n − 1/n)][√(pq) + 1/(2n)], except UB = 1 for r = n

Vollset (1993) compared thirteen methods for computing binomial confidence intervals using the evaluative criteria of coverage probability, interval width, and errors relative to limits. Vollset proposed a mean Pratt (MP), a modification of P that is a closed form approximation to the mid-P exact interval. Define the UB of P as:

P: UB = [1 + ((r + 1)/(n − r))² ((A − B)/C)³]⁻¹,

with A = 81(r + 1)(n − r) − 9n − 8, B = 3z√[9(r + 1)(n − r)(9n + 5z²) + n + 1], and C = 81(r + 1)² − 9(r + 1)(2 + z²) + 1. For the P LB, replace r with r − 1 and z with −z. The Vollset MP lower and upper bounds are then defined as:

MP: LB = {P_LB(r) + P_LB(r + 1)}/2,  UB = {P_UB(r) + P_UB(r − 1)}/2

Vollset argued that W and WCC were unsatisfactory and that the Clopper-Pearson, Pratt's approximation, MP, S, and SCC are methods that may be safely used in all applications.

[Figure 1. Coverage probabilities (n = 20) for the Wald and Wald CC binomial confidence interval methods, each compared with the Clopper-Pearson method.]

Newcombe (1998) compared seven methods for constructing two-sided binomial confidence intervals (W, WCC, S, SCC, Clopper-Pearson, mid-P, and a likelihood-based method). The W and WCC were quickly judged as being inadequate, highly anti-conservative, asymmetrical in coverage, and incurring a higher risk of unacceptable boundary limits. Newcombe argued that neither W nor WCC should be acceptable methods for the scientific literature since other methods are tractable and all perform much better. Newcombe further argued that the use of the simple asymptotic standard error of a proportion should be restricted to sample size planning and introductory teaching purposes. Newcombe preferred three methods: the Clopper-Pearson method, the Score method, and the mid-P binomial-based method.

Agresti and Coull, in noting the poor performance of the Wald interval and the conservativeness of the Clopper-Pearson interval, proposed a straightforward adjustment - the "add 4" to Wald. They suggested simply adding two successes and two failures and then using the Wald formula. Alternatively, one could add z²/2 successes and z²/2 failures before computing the Wald confidence interval. The latter is preferred.
The Agresti-Coull adjusted Wald (AC) lower and upper bounds are:

AC: LB = p′ − z√[p′q′/n′],  UB = p′ + z√[p′q′/n′], where p′ = (2r + z²)/(2n + z²), q′ = 1 − p′, and n′ = n + z².

Pires (2002) compared twelve methods for constructing confidence intervals for a binomial proportion and concluded that a clear classification of conservative methods included the Clopper-Pearson, the Score, and two arcsine transformation methods. A second tier of recommended confidence interval construction methods included a Bayesian method and the SCC.

Agresti (2003) argued for reducing the effects of discreteness in binomial confidence intervals by inverting two-sided tests rather than two one-sided tests. In most statistical practice, for interval estimation of a proportion or a difference or ratio of proportions, the inversion of the asymptotic score test is the best choice. If one wants to be a bit more conservative, mid-P adaptations or the Clopper-Pearson are recommended. For teaching purposes, the Wald-type interval plus and minus a normal-score multiple of a standard error is simplest.

Reiczigel (2003) compared the following methods for constructing binomial confidence intervals: Wilson's Score, the Agresti-Coull adjusted Wald, the Clopper-Pearson, the mid-P, and Sterne's interval. Unique to this study is the recommendation of using the Sterne interval and the Agresti-Coull adjusted Wald interval for binomial confidence intervals.

Tobi et al. (2005) compared the performance of seven approximate methods and the exact Clopper-Pearson confidence intervals for small proportions. Three criteria were used to evaluate the performance of confidence intervals: coverage, confidence interval width, and aberrant confidence intervals. They concluded that: (1) one should compute confidence intervals for small proportions even when the number of events equals zero, (2) one should report what method has been used for confidence interval calculation, (3) the W method should be discarded, and (4) the Clopper-Pearson and the SCC are the best choices to calculate confidence intervals for small proportions.

Borkowf (2005) argued that even though the Agresti-Coull binomial confidence intervals are substantially better than the Wald method, they can yield subnominal coverage for some values of π for moderate sample sizes. He proposed a binomial confidence interval that results in near nominal coverage and is easy to calculate: first augment the original data with a single imaginary failure to compute the lower confidence bound, and with a single imaginary success to compute the upper confidence bound - the single augmentation with an imaginary failure or success (SAIFS) method. The lower and upper SAIFS confidence bounds are then:

SAIFS: LB = p₁ − ξ_{1−α/2}√[p₁q₁/n],  UB = p₂ + ξ_{1−α/2}√[p₂q₂/n], with p₁ = (r + 0)/(n + 1) and p₂ = (r + 1)/(n + 1).

Borkowf (2005) evaluated two forms of the SAIFS. The first uses the z-quantiles (ξ_{1−α/2}) and the second uses the t-quantiles (τ_{n−1, 1−α/2}). Compared to the Clopper-Pearson method, the SAIFS method using either the z or t quantiles results in confidence intervals with mean widths that are narrower for proportion parameters near 0 or 1 and whose coverage probabilities are marginally better over all values of π. The SAIFS-z is preferred. Figure 2 shows the 95% confidence interval coverage probability as a function of π, π ∈ [0,1], for n = 20 for CP, WBS, S, SCC, AC, and SAIFS-z.
Note that the sawtooth appearance of the coverage functions is due to the discontinuities for values of p corresponding to any lower or upper limits in the set of n + 1 confidence intervals. The Clopper-Pearson and Borkowf SAIFS-z methods give at least nominal coverage for all values of π ∈ [0,1], with severe overcoverage near 0 and 1. The Score CC method gives at least nominal coverage for all values of π ∈ [0,1] and avoids the overcoverage of either the Clopper-Pearson or Score methods. The Score and Agresti-Coull methods yield nearly nominal coverage for all values of π ∈ [0,1].

Conclusion

For the one sample binomial confidence interval, a new generation of introductory and medical statistics textbooks should emphasize the poor performance properties of W and WCC and include better binomial confidence methods. At least one from the set of Clopper-Pearson, S, SCC, Agresti-Coull, or the SAIFS-z methods should be mentioned. With the widespread use of laptop computers and access to computing resources on the internet, the complexity of computing binomial confidence intervals should not be an issue.

The question remains as to which method to use. The Clopper-Pearson exact method has been regarded as definitive as it eliminates both overshoot and zero width intervals. The Clopper-Pearson exact method is the most conservative and is unquestionably a better alternative to W when constructing and reporting binomial confidence intervals. In terms of programming ease, the Clopper-Pearson is easily programmed, as are the Blyth & Still, Wilson's Score, Score with a continuity correction, Agresti-Coull, and Borkowf SAIFS-z methods.

[Figure 2. Coverage probabilities (n = 20) for the Clopper-Pearson, Score, Score CC, Agresti-Coull, and Borkowf SAIFS-z binomial confidence interval methods.]

Table 1. Methods for calculation of confidence intervals for a single proportion

Clopper-Pearson (CP):
  LB = [1 + (n − r + 1)/(r · F_{2r, 2(n−r+1), 1−α/2})]⁻¹; LB = 0 if r = 0, LB = (α/2)^{1/n} if r = n.
  UB = [1 + (n − r)/((r + 1) · F_{2(r+1), 2(n−r), α/2})]⁻¹; UB = 1 − (α/2)^{1/n} if r = 0, UB = 1 if r = n.

Score (Wilson) (S):
  LB = (2np + z² − z√{z² + 4npq}) / (2(n + z²))
  UB = (2np + z² + z√{z² + 4npq}) / (2(n + z²))

Score with continuity correction (SCC):
  LB = [2np + z² − 1 − z√{z² − 2 − 1/n + 4p(nq + 1)}] / (2n + 2z²)
  UB = [2np + z² + 1 + z√{z² + 2 − 1/n + 4p(nq − 1)}] / (2n + 2z²)

Agresti-Coull (AC):
  LB = p′ − z√[p′q′/n′], UB = p′ + z√[p′q′/n′], where p′ = (2r + z²)/(2n + z²), q′ = 1 − p′, and n′ = n + z².

Borkowf SAIFS:
  LB = p₁ − ξ_{1−α/2}√[p₁q₁/n], UB = p₂ + ξ_{1−α/2}√[p₂q₂/n], with p₁ = (r + 0)/(n + 1) and p₂ = (r + 1)/(n + 1), where ξ_{1−α/2} are z-quantiles or τ_{n−1, 1−α/2} the t-quantiles.
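The closed-form methods in Table 1 are simple to implement. The sketch below is our own illustration, not code from the paper; it uses SciPy's normal and beta quantile functions (the beta-quantile form of the Clopper-Pearson interval is algebraically equivalent to the F form in Table 1):

    from scipy.stats import norm, beta

    def binomial_cis(r, n, alpha=0.05):
        """Wald, Wilson Score, Agresti-Coull, and Clopper-Pearson intervals."""
        z = norm.ppf(1 - alpha / 2)
        p = r / n
        q = 1 - p

        wald = (p - z * (p * q / n) ** 0.5, p + z * (p * q / n) ** 0.5)

        # Wilson Score: (2np + z^2 -/+ z*sqrt(z^2 + 4npq)) / (2(n + z^2))
        half = z * (z * z + 4 * n * p * q) ** 0.5
        score = ((2 * n * p + z * z - half) / (2 * (n + z * z)),
                 (2 * n * p + z * z + half) / (2 * (n + z * z)))

        # Agresti-Coull: Wald formula applied to p' = (2r + z^2)/(2n + z^2), n' = n + z^2
        pp = (2 * r + z * z) / (2 * n + z * z)
        npp = n + z * z
        se = (pp * (1 - pp) / npp) ** 0.5
        ac = (pp - z * se, pp + z * se)

        # Clopper-Pearson via beta quantiles
        cp_lb = 0.0 if r == 0 else beta.ppf(alpha / 2, r, n - r + 1)
        cp_ub = 1.0 if r == n else beta.ppf(1 - alpha / 2, r + 1, n - r)

        return {"Wald": wald, "Score": score, "Agresti-Coull": ac,
                "Clopper-Pearson": (cp_lb, cp_ub)}

    print(binomial_cis(r=3, n=20))  # e.g. 3 successes out of 20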
References

Agresti, A. & Coull, B. A. (1998). Approximate is better than 'exact' for interval estimation of binomial proportions. The American Statistician, 52, 119-126.
Agresti, A. & Min, Y. (2001). On small-sample confidence intervals for parameters in discrete distributions. Biometrics, 57, 963-971.
Agresti, A. (2003). Dealing with discreteness: Making 'exact' confidence intervals for proportions, differences of proportions, and odds ratios more exact. Statistical Methods in Medical Research, 12, 3-21.
Blyth, C. R. & Still, H. A. (1983). Binomial confidence intervals. Journal of the American Statistical Association, 78, 108-116.
Bonett, D. G. & Price, R. M. (2005). Confidence intervals for a ratio of binomial proportions based on paired data. Statistical Methods in Medical Research, 15.
Borkowf, C. B. (2005). Constructing binomial confidence intervals with near nominal coverage by adding a single imaginary failure or success. Statistical Methods in Medical Research, 25.
Clopper, C. J. & Pearson, E. S. (1934). The use of confidence or fiducial limits illustrated in the case of the binomial. Biometrika, 26, 404-413.
Fleiss, J. H. (1981). Statistical methods for rates and proportions (2nd ed.). New York: John Wiley & Sons.
Newcombe, R. G. (1998). Two-sided confidence intervals for the single proportion: Comparison of seven methods. Statistical Methods in Medical Research, 17, 857-872.
Pires, A. M. (2002). Confidence intervals for a binomial proportion: Comparison of methods and software evaluation. Proceedings of the Conference CompStat 2002.
Pratt, J. W. (1968). A normal approximation for binomial, F, Beta, and other common, related tail probabilities. Journal of the American Statistical Association, 63, 1457-1483.
Radhakrishna, S., Murthy, B. N., Nair, N. G. K., Jayabal, P., & Jayasri, R. (1992). Confidence intervals in medical research. Indian Journal of Medical Research [B], 96, 199-205.
Reiczigel, J. (2003). Confidence intervals for the binomial parameter: Some new considerations. Statistical Methods in Medical Research, 22, 611-621.
Sterne, T. E. (1954). Some remarks on confidence or fiducial limits. Biometrika, 41, 275-278.
Tobi, H., van den Berg, P. B., & de Jong-van den Berg, L. T. W. (2005). Small proportions: What to report for confidence intervals. Pharmacoepidemiology and Drug Safety, 14, 239-247.
Vollset, S. E. (1993). Confidence intervals for a binomial proportion. Statistical Methods in Medical Research, 12, 809-824.
Wilson, E. B. (1927). Probable inference, the law of succession, and statistical inference. Journal of the American Statistical Association, 22, 209-212.
189331
https://teachy.ai/en/lesson-plan/middle-school/8th-grade/mathematics-en/expository-methodology-or-linear-equations-comparison-or-lesson-plan
Lesson plan: Linear Equations: Comparison
By Lara from Teachy
Subject: Mathematics | Source: Original Teachy | Topic: Linear Equations: Comparison

Objectives (5-7 minutes)

Understanding of the concept of linear equations: Students should be able to understand what a linear equation is and how it is used to represent a comparison situation. They should grasp that a linear equation is an equality between two expressions, both linear, which can be true or false depending on the values assigned to the variables.

Application of the substitution method: Students need to be able to apply the substitution method in solving linear equations. They should understand how to replace a variable in an equation and how to simplify the resulting equation.

Solving comparison problems: Students should be able to solve comparison problems using linear equations. They should be able to translate a comparison situation into a linear equation, and then solve the equation to find the solution.

Secondary Objectives:

Development of logical and analytical thinking: By working with linear equations, students will have the opportunity to develop their logical and analytical thinking skills. They will need to analyze the problem, identify the relevant variables, and use logic to solve the equation.

Practice of basic math skills: Solving linear equations involves using several basic math skills, such as simplifying expressions, operating with negative numbers, and fractions. Students will have the opportunity to practice these skills as they work with linear equations.

Introduction (10-15 minutes)

Review of previous content: The teacher should start the lesson by briefly reviewing the previous content that is essential for understanding the current topic. This may include the definition of equations, variables, and linear expressions, as well as the methods of solving linear equations that have been studied before. This review should be done interactively, with questions and answers to engage students and ensure they have a solid understanding of these concepts. (3-5 minutes)

Presentation of problem situations: Next, the teacher should present two problem situations that involve comparing quantities. For example, a problem about comparing ages or a problem about comparing prices of products could be given. The teacher should explain that the aim is to use linear equations to solve these problems. (3-5 minutes)

Contextualization of the topic's importance: The teacher should then contextualize the importance of the topic by explaining that the ability to solve linear equations is fundamental in many areas of life and work. For instance, linear equations are used in economics to model supply and demand of products, in engineering to design structures, and in sciences to predict the behavior of physical systems. The teacher can give specific examples of how linear equations are used in different contexts. (2-3 minutes)

Introduction of the topic: Finally, the teacher should introduce the topic of the lesson - linear equations of comparison - explaining that this is a fundamental topic that students need to understand in order to continue studying algebra. The teacher should emphasize that while linear equations may seem complicated at first, they become easier with practice and that students will have many opportunities to practice during the lesson.
(2-3 minutes)

Development (20-25 minutes)

Theory - Content Presentation (10-12 minutes)

Definition of linear equations of comparison: The teacher should start by explaining that a linear equation of comparison is a mathematical expression that represents a comparison situation between two or more quantities. He/she should give examples of linear equations of comparison, such as "2x + 3 = 7" (where x represents an unknown quantity) and "2y - 5 = 3y + 1" (where y is the unknown quantity).

Explanation of the substitution method: The teacher should then explain the substitution method, which is a technique used to solve linear equations. He/she should explain that the substitution method involves replacing an expression in one equation with another expression that is equal to it. The teacher should give examples of how to use the substitution method to solve linear equations, step by step.

Discussion about the importance of solving linear equations: The teacher should discuss why solving linear equations is important. He/she should explain that solving linear equations is a fundamental skill in mathematics and is used in many areas of life and work.

Practice - Content Application (10-13 minutes)

Linear equation solving activity: The teacher should then provide students with a number of linear equations of comparison to solve. The equations should be of increasing difficulty, starting with simple equations and gradually increasing the complexity. The teacher should circulate around the room, providing assistance and clarifying doubts as needed.

Group discussion: After the students have had sufficient time to solve the equations, the teacher should lead a group discussion about the solutions. He/she should ask students to explain how they arrived at their answers and what strategies they used to solve the equations. The teacher should encourage students to ask questions and to offer suggestions for improving the solutions.

Theory - Content Review (5-7 minutes)

Recap of the content: The teacher should then recap the main points of the lesson. He/she should highlight the definition of linear equations of comparison, the substitution method, and the importance of solving linear equations. The teacher can use additional examples to reinforce these concepts.

Questions and answers: The teacher should then open the floor for questions and answers. He/she should answer any questions students may have and clarify any misconceptions that may have arisen during the lesson.

Feedback (8-10 minutes)

Review and Reflection (3-4 minutes)

The teacher should begin this stage by asking students to revisit the solutions they found for the given problems. He/she should remind them that there is not just one correct way to solve a linear equation but rather several possible approaches. This encourages students to think critically about their own solutions and to consider other possibilities.

Next, the teacher should ask reflection questions, such as: "What was the most important concept you learned today?" and "What questions do you still have?" These questions encourage students to think about what they have learned and to identify any areas that may need further review.

Connection to the Real World (2-3 minutes)

The teacher should then explain how the topic of the lesson connects to the real world.
For example, he/she could talk about how linear equations are used to model real-world situations, such as the relationship between time and distance in uniform motion, or the relationship between supply and demand in economics. The teacher could also ask students to think of other examples of how linear equations can be used in everyday life or in different fields of study. This helps reinforce the relevance of the topic and stimulates students' critical thinking.

Feedback and Closure (3-4 minutes)

The teacher should wrap up the lesson by asking for students' feedback on the lesson. He/she could ask what they enjoyed about the lesson, what they found most challenging, and what they would like to see more of in future lessons. This allows the teacher to make adjustments to future lessons based on students' feedback.

Finally, the teacher should summarize the key points of the lesson and explain what will be covered in the next lesson. He/she should encourage students to review the lesson material at home and to ask any questions they may have. The teacher should also remind students of any homework or reading that may have been assigned.

Conclusion (5-7 minutes)

Content Recap (2-3 minutes)

The teacher should begin the Conclusion by recapping the main points of the lesson. This includes the definition of linear equations, the substitution method, and the application of solving linear equations in comparison situations. He/she can do this through a quick interactive review, asking students to provide the definitions or explain the processes. This helps reinforce the knowledge gained and ensures that students have a solid understanding of the concepts.

Connection Between Theory, Practice, and Applications (1-2 minutes)

Next, the teacher should highlight how the lesson connected theory, practice, and applications. He/she could mention how the theoretical presentation of the concept of linear equations of comparison was followed by practicing how to solve such equations. Additionally, the teacher could point out how the skill of solving linear equations is applied in several areas of life and work, such as in economics, engineering, and sciences.

Extra Materials (1-2 minutes)

The teacher should then suggest extra materials for students who wish to enhance their knowledge of the topic. This could include recommended math books, math websites, explanatory videos, and linear equation solver apps. For example, the teacher could suggest that students practice solving linear equations using a particular app or that they watch an online explanatory video that demonstrates the substitution method in a clear and detailed way.

Importance of the Topic in Everyday Life (1 minute)

Finally, the teacher should emphasize the importance of the lesson topic for everyday life. He/she could explain that while linear equations may seem abstract, they are used in many aspects of daily life. For instance, linear equations are used to solve comparison problems like calculating discounts in stores, predicting the duration of a journey based on the average speed, or determining the amount of ingredients needed for a recipe. Therefore, the ability to solve linear equations is a practical and useful skill that students can apply in many real-life situations.
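As a complement to the linear equation solver apps suggested under Extra Materials, a teacher could also check answers with a few lines of SymPy. This is a minimal sketch of our own, using the lesson's two example equations:

    from sympy import Eq, solve, symbols

    x, y = symbols("x y")

    # The two comparison equations used as examples in the Development stage.
    print(solve(Eq(2*x + 3, 7), x))        # [2]
    print(solve(Eq(2*y - 5, 3*y + 1), y))  # [-6]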
189332
https://achievethecore.org/coherence-map/4/18/186/186
Measurement And Data - Solve Problems Involving Measurement And Estimation Of Intervals Of Time, Liquid Volumes, And Masses Of Objects. (Major Cluster)

3.MD.A.2 Measure and estimate liquid volumes and masses of objects using standard units of grams (g), kilograms (kg), and liters (l). Add, subtract, multiply, or divide to solve one-step word problems involving masses or volumes that are given in the same units, e.g., by using drawings (such as a beaker with a measurement scale) to represent the problem. Excludes compound units such as cm³ and finding the geometric volume of a container. Excludes multiplicative comparison problems (problems involving notions of "times as much"; see Glossary, Table 2).

Number And Operations-Fractions - Develop Understanding Of Fractions As Numbers. (Major Cluster)

3.NF.A.1 Understand a fraction 1/b as the quantity formed by 1 part when a whole is partitioned into b equal parts; understand a fraction a/b as the quantity formed by a parts of size 1/b. Grade 3 expectations in this domain are limited to fractions with denominators 2, 3, 4, 6, and 8.

3.NF.A.2 Understand a fraction as a number on the number line; represent fractions on a number line diagram. Grade 3 expectations in this domain are limited to fractions with denominators 2, 3, 4, 6, and 8.

Operations And Algebraic Thinking - Represent And Solve Problems Involving Multiplication And Division. (Major Cluster)

3.OA.A.1 Interpret products of whole numbers, e.g., interpret 5 × 7 as the total number of objects in 5 groups of 7 objects each. For example, describe a context in which a total number of objects can be expressed as 5 × 7.

Operations And Algebraic Thinking - Understand Properties Of Multiplication And The Relationship Between Multiplication And Division. (Major Cluster)

3.OA.B.6 Understand division as an unknown-factor problem. For example, find 32 ÷ 8 by finding the number that makes 32 when multiplied by 8.

Operations And Algebraic Thinking - Use The Four Operations With Whole Numbers To Solve Problems. (Major Cluster)

4.OA.A.2 Multiply or divide to solve word problems involving multiplicative comparison, e.g., by using drawings and equations with a symbol for the unknown number to represent the problem, distinguishing multiplicative comparison from additive comparison. See Glossary, Table 2.

Measurement And Data - Solve Problems Involving Measurement And Conversion Of Measurements From A Larger Unit To A Smaller Unit. (Supporting Cluster)

4.MD.A.1 Know relative sizes of measurement units within one system of units including km, m, cm; kg, g; lb, oz.; l, ml; hr, min, sec. Within a single system of measurement, express measurements in a larger unit in terms of a smaller unit. Record measurement equivalents in a two-column table. For example, know that 1 ft is 12 times as long as 1 in. Express the length of a 4 ft snake as 48 in. Generate a conversion table for feet and inches listing the number pairs (1, 12), (2, 24), (3, 36), ...

Operations And Algebraic Thinking - Represent And Solve Problems Involving Multiplication And Division. (Major Cluster)

3.OA.A.3 Use multiplication and division within 100 to solve word problems in situations involving equal groups, arrays, and measurement quantities, e.g., by using drawings and equations with a symbol for the unknown number to represent the problem. See Glossary, Table 2.

Operations And Algebraic Thinking - Represent And Solve Problems Involving Multiplication And Division.
(Major Cluster)

3.OA.A.4 Determine the unknown whole number in a multiplication or division equation relating three whole numbers. For example, determine the unknown number that makes the equation true in each of the equations 8 × ? = 48, 5 = ? ÷ 3, 6 × 6 = ?.

Number And Operations-Fractions - Build Fractions From Unit Fractions By Applying And Extending Previous Understandings Of Operations On Whole Numbers. (Major Cluster)

4.NF.B.4 Apply and extend previous understandings of multiplication to multiply a fraction by a whole number. Grade 4 expectations in this domain are limited to fractions with denominators 2, 3, 4, 5, 6, 8, 10, 12, and 100.

4.NF.B.4.a Understand a fraction a/b as a multiple of 1/b. For example, use a visual fraction model to represent 5/4 as the product 5 × (1/4), recording the conclusion by the equation 5/4 = 5 × (1/4).

4.NF.B.4.b Understand a multiple of a/b as a multiple of 1/b, and use this understanding to multiply a fraction by a whole number. For example, use a visual fraction model to express 3 × (2/5) as 6 × (1/5), recognizing this product as 6/5. (In general, n × (a/b) = (n × a)/b.)

4.NF.B.4.c Solve word problems involving multiplication of a fraction by a whole number, e.g., by using visual fraction models and equations to represent the problem. For example, if each person at a party will eat 3/8 of a pound of roast beef, and there will be 5 people at the party, how many pounds of roast beef will be needed? Between what two whole numbers does your answer lie?

Example Task: 4.NF Extending Multiplication From Whole Numbers to Fractions. Provided by Illustrative Mathematics.

Task: Write a story problem that can be solved by finding 5 × 4. Draw two different diagrams that show that 5 × 4 = 20. Explain how your diagrams represent 5 × 4. Which of the diagrams you used to represent 5 × 4 can be used to represent 5 × (1/4)? Draw the diagram if possible.

Solution

Part 1. The student may create a context that uses the "groups of" representation, for example saying that there are five bags, and each bag has 4 apples. Together, there are 20 apples if you combine all of the five bags. Depending on the representations they choose, the student might also say that finding the area of a rectangle with side lengths 4 units and 5 units would be an appropriate context. Students may write a multiplicative comparison problem; for example, "Suzy has 4 lollipops and Lily has 5 times as many lollipops."

Part 2. Students might draw diagrams based on sets, length (such as a number line or bar diagrams) or area that represent equal groups. They also might draw a multiplicative comparison diagram. See examples below. The first diagram shows 5 groups of 4, which totals 20. The second diagram shows 5 rectangles of length 4 put end to end, which has a total length of 20. The third diagram shows a rectangle with side lengths 5 and 4 and area 20. The fourth diagram shows two rectangles; one is of length 4 and the other is 5 times as long with a total length of 20.

Part 3. All of the diagrams can be adapted; see below.

Related resources: The Penny (task); Fraction Concepts Mini-Assessment; Smarter Balanced Assessment Item Illustrating 4.NF.B.4; NWEA Assessment Item Illustrating 4.NF.B.4.b; NWEA Assessment Item Illustrating 4.NF.B.4.c; Focus in Grade 4.

Number And Operations-Fractions - Apply And Extend Previous Understandings Of Multiplication And Division To Multiply And Divide Fractions.
Number And Operations-Fractions. Apply And Extend Previous Understandings Of Multiplication And Division To Multiply And Divide Fractions. Major Cluster
5.NF.B.4 Apply and extend previous understandings of multiplication to multiply a fraction or whole number by a fraction.
Number And Operations-Fractions. Apply And Extend Previous Understandings Of Multiplication And Division To Multiply And Divide Fractions. Major Cluster
5.NF.B.7 Apply and extend previous understandings of division to divide unit fractions by whole numbers and whole numbers by unit fractions. Students able to multiply fractions in general can develop strategies to divide fractions in general, by reasoning about the relationship between multiplication and division. But division of a fraction by a fraction is not a requirement at this grade.
Measurement And Data. Solve Problems Involving Measurement And Conversion Of Measurements From A Larger Unit To A Smaller Unit. Supporting Cluster
4.MD.A.2 Use the four operations to solve word problems involving distances, intervals of time, liquid volumes, masses of objects, and money, including problems involving simple fractions or decimals, and problems that require expressing measurements given in a larger unit in terms of a smaller unit. Represent measurement quantities using diagrams such as number line diagrams that feature a measurement scale.
189333
https://www.britannica.com/animal/godwit
godwit
godwit, any of four species of large, long-billed shorebirds of the genus Limosa, family Scolopacidae, named for its whistling call. Godwits are generally reddish brown in summer and grayish in winter; all nest in the Northern Hemisphere. The black-tailed godwit (L. limosa), about 40 centimetres (16 inches) long including the bill, has a black-banded, white tail. The bill is long and straight. The black-tailed godwit, which breeds in Iceland and on wet plains across Eurasia, is the emblem of the Netherlands Ornithological Union. In North America a smaller form, the Hudsonian godwit (L. haemastica), declined in population from overshooting to an estimated 2,000 survivors, but it may be reviving. The other North American form, the marbled godwit (L. fedoa), with slightly upturned bill and pinkish brown underwings, is fairly common; it undergoes little seasonal colour change. Slightly smaller is the bar-tailed godwit (L. lapponica), of the Eurasian and Alaskan tundra. Some members of the subspecies L. lapponica baueri are capable of migrating nonstop from Alaska to New Zealand.
189334
https://math.stackexchange.com/questions/1938172/complex-analyis-verifying-harmonic
Complex analysis, verifying harmonic - Mathematics Stack Exchange
Asked 9 years ago. Modified 9 years ago. Viewed 330 times.
My question is: Verify that each given function u is harmonic (in the region where it is defined) and then find a harmonic conjugate of u.
$$u = \mbox{Im}(e^{z^2})$$
I know how to find the harmonic conjugate, but what should my proof of the function being harmonic look like? Thanks in advance!
complex-analysis
edited Sep 23, 2016 at 10:53 by Robert Z; asked Sep 23, 2016 at 8:29 by Stratbomber
Comment (Stratbomber, Sep 23, 2016 at 8:32): My confusion is that I'm first supposed to prove that u is harmonic, and then find the harmonic conjugate. Normally, I would find the harmonic conjugate first, then check that they obey the Laplace equation, but if I do that here, when the question is posed this way, it feels wrong.

1 Answer

Hint. Let $z=x+iy$; then
$$u(x,y)=\mbox{Im}(\exp(z^2))=\mbox{Im}(\exp(x^2-y^2+2ixy))=e^{x^2-y^2}\sin(2xy).$$
In order to verify that it is harmonic, check if the Laplacian of u is identically zero:
$$\frac{\partial^2 u}{\partial x^2}+\frac{\partial^2 u}{\partial y^2}=0.$$
Note that $e^{z^2}$ is a holomorphic function in $\mathbb{C}$, so the harmonic conjugate will be
$$-\mbox{Re}(\exp(z^2))=-e^{x^2-y^2}\cos(2xy).$$
edited Sep 23, 2016 at 8:41; answered Sep 23, 2016 at 8:33 by Robert Z
Comment (Stratbomber): thanks for your answer Robert Z, I think I was a bit unclear in my question, but how do I prove that u is a harmonic function?
Comment (Robert Z): @Stratbomber Check if the Laplacian of u is identically zero.
Comment (Stratbomber): ahh, okay, thanks, dont know why I had such a hard time with this last time!
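A quick symbolic check of the hint (an editorial sketch, not part of the original answer; it assumes sympy is installed):

```python
import sympy as sp

x, y = sp.symbols('x y', real=True)
u = sp.exp(x**2 - y**2) * sp.sin(2*x*y)   # Im(exp(z^2)) for z = x + i*y

# Harmonic means the Laplacian vanishes identically.
laplacian = sp.diff(u, x, 2) + sp.diff(u, y, 2)
print(sp.simplify(laplacian))             # prints 0
```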
189335
https://www.doubtnut.com/qna/277386514
Geometric mean of 4 and 9 is ________.
The correct answer is: 6
To find the geometric mean of the numbers 4 and 9, we can follow these steps:
Step 1: Understand the formula for geometric mean. The geometric mean (GM) of two numbers a and b is given by the formula GM = √(a⋅b).
Step 2: Identify the numbers. In this case, we have a = 4 and b = 9.
Step 3: Substitute the values into the formula: GM = √(4⋅9).
Step 4: Calculate the product: 4⋅9 = 36.
Step 5: Take the square root: GM = √36.
Step 6: Simplify the square root. The square root of 36 is 6.
Final Answer: the geometric mean of 4 and 9 is 6.
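The same computation as a one-line check in Python (an editorial illustration only):

```python
import math

a, b = 4, 9
gm = math.sqrt(a * b)   # geometric mean of two numbers: sqrt(a*b)
print(gm)               # 6.0
```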
189336
https://stackoverflow.com/questions/63466796/using-the-hypergeometric-test-in-python
statistics - Using the hypergeometric test in python - Stack Overflow Join Stack Overflow By clicking “Sign up”, you agree to our terms of service and acknowledge you have read our privacy policy. Sign up with Google Sign up with GitHub OR Email Password Sign up Already have an account? Log in Skip to main content Stack Overflow 1. About 2. Products 3. For Teams Stack Overflow for Teams Where developers & technologists share private knowledge with coworkers Advertising Reach devs & technologists worldwide about your product, service or employer brand Knowledge Solutions Data licensing offering for businesses to build and improve AI tools and models Labs The future of collective knowledge sharing About the companyVisit the blog Loading… current community Stack Overflow helpchat Meta Stack Overflow your communities Sign up or log in to customize your list. more stack exchange communities company blog Log in Sign up Let's set up your homepage Select a few topics you're interested in: python javascript c#reactjs java android html flutter c++node.js typescript css r php angular next.js spring-boot machine-learning sql excel ios azure docker Or search from our full list: javascript python java c# php android html jquery c++ css ios sql mysql r reactjs node.js arrays c asp.net json python-3.x .net ruby-on-rails sql-server swift django angular objective-c excel pandas angularjs regex typescript ruby linux ajax iphone vba xml laravel spring asp.net-mvc database wordpress string flutter postgresql mongodb wpf windows xcode amazon-web-services bash git oracle-database spring-boot dataframe azure firebase list multithreading docker vb.net react-native eclipse algorithm powershell macos visual-studio numpy image forms scala function vue.js performance twitter-bootstrap selenium winforms kotlin loops express dart hibernate sqlite matlab python-2.7 shell rest apache entity-framework android-studio csv maven linq qt dictionary unit-testing asp.net-core facebook apache-spark tensorflow file swing class unity-game-engine sorting date authentication go symfony t-sql opencv matplotlib .htaccess google-chrome for-loop datetime codeigniter perl http validation sockets google-maps object uitableview xaml oop visual-studio-code if-statement cordova ubuntu web-services email android-layout github spring-mvc elasticsearch kubernetes selenium-webdriver ms-access ggplot2 user-interface parsing pointers c++11 google-sheets security machine-learning google-apps-script ruby-on-rails-3 templates flask nginx variables exception sql-server-2008 gradle debugging tkinter delphi listview jpa asynchronous web-scraping haskell pdf jsp ssl amazon-s3 google-cloud-platform jenkins testing xamarin wcf batch-file generics npm ionic-framework network-programming unix recursion google-app-engine mongoose visual-studio-2010 .net-core android-fragments assembly animation math svg session intellij-idea hadoop rust next.js curl join winapi django-models laravel-5 url heroku http-redirect tomcat google-cloud-firestore inheritance webpack image-processing gcc keras swiftui asp.net-mvc-4 logging dom matrix pyspark actionscript-3 button post optimization firebase-realtime-database web jquery-ui cocoa xpath iis d3.js javafx firefox xslt internet-explorer caching select asp.net-mvc-3 opengl events asp.net-web-api plot dplyr encryption magento stored-procedures search amazon-ec2 ruby-on-rails-4 memory canvas audio multidimensional-array random jsf vector redux cookies input facebook-graph-api flash indexing xamarin.forms arraylist ipad 
cocoa-touch data-structures video azure-devops model-view-controller apache-kafka serialization jdbc woocommerce razor routes awk servlets mod-rewrite excel-formula beautifulsoup filter docker-compose iframe aws-lambda design-patterns text visual-c++ django-rest-framework cakephp mobile android-intent struct react-hooks methods groovy mvvm ssh lambda checkbox time ecmascript-6 grails google-chrome-extension installation cmake sharepoint shiny spring-security jakarta-ee plsql android-recyclerview core-data types sed meteor android-activity activerecord bootstrap-4 websocket graph replace scikit-learn group-by vim file-upload junit boost memory-management sass import async-await deep-learning error-handling eloquent dynamic soap dependency-injection silverlight layout apache-spark-sql charts deployment browser gridview svn while-loop google-bigquery vuejs2 dll highcharts ffmpeg view foreach makefile plugins redis c#-4.0 reporting-services jupyter-notebook unicode merge reflection https server google-maps-api-3 twitter oauth-2.0 extjs terminal axios pip split cmd pytorch encoding django-views collections database-design hash netbeans automation data-binding ember.js build tcp pdo sqlalchemy apache-flex mysqli entity-framework-core concurrency command-line spring-data-jpa printing react-redux java-8 lua html-table ansible jestjs neo4j service parameters enums material-ui flexbox module promise visual-studio-2012 outlook firebase-authentication web-applications webview uwp jquery-mobile utf-8 datatable python-requests parallel-processing colors drop-down-menu scipy scroll tfs hive count syntax ms-word twitter-bootstrap-3 ssis fonts rxjs constructor google-analytics file-io three.js paypal powerbi graphql cassandra discord graphics compiler-errors gwt socket.io react-router solr backbone.js memory-leaks url-rewriting datatables nlp oauth terraform datagridview drupal oracle11g zend-framework knockout.js triggers neural-network interface django-forms angular-material casting jmeter google-api linked-list path timer django-templates arduino proxy orm directory windows-phone-7 parse-platform visual-studio-2015 cron conditional-statements push-notification functional-programming primefaces pagination model jar xamarin.android hyperlink uiview visual-studio-2013 vbscript google-cloud-functions gitlab azure-active-directory jwt download swift3 sql-server-2005 configuration process rspec pygame properties combobox callback windows-phone-8 linux-kernel safari scrapy permissions emacs scripting raspberry-pi clojure x86 scope io expo azure-functions compilation responsive-design mongodb-query nhibernate angularjs-directive request bluetooth reference binding dns architecture 3d playframework pyqt version-control discord.js doctrine-orm package f# rubygems get sql-server-2012 autocomplete tree openssl datepicker kendo-ui jackson yii controller grep nested xamarin.ios static null statistics transactions active-directory datagrid dockerfile uiviewcontroller webforms discord.py phpmyadmin sas computer-vision notifications duplicates mocking youtube pycharm nullpointerexception yaml menu blazor sum plotly bitmap asp.net-mvc-5 visual-studio-2008 yii2 electron floating-point css-selectors stl jsf-2 android-listview time-series cryptography ant hashmap character-encoding stream msbuild asp.net-core-mvc sdk google-drive-api jboss selenium-chromedriver joomla devise cors navigation anaconda cuda background frontend binary multiprocessing pyqt5 camera iterator linq-to-sql mariadb onclick android-jetpack-compose ios7 
microsoft-graph-api rabbitmq android-asynctask tabs laravel-4 environment-variables amazon-dynamodb insert uicollectionview linker xsd coldfusion console continuous-integration upload textview ftp opengl-es macros operating-system mockito localization formatting xml-parsing vuejs3 json.net type-conversion data.table kivy timestamp integer calendar segmentation-fault android-ndk prolog drag-and-drop char crash jasmine dependencies automated-tests geometry azure-pipelines android-gradle-plugin itext fortran sprite-kit header mfc firebase-cloud-messaging attributes nosql format nuxt.js odoo db2 jquery-plugins event-handling jenkins-pipeline nestjs leaflet julia annotations flutter-layout keyboard postman textbox arm visual-studio-2017 gulp stripe-payments libgdx synchronization timezone uikit azure-web-app-service dom-events xampp wso2 crystal-reports namespaces swagger android-emulator aggregation-framework uiscrollview jvm google-sheets-formula sequelize.js com chart.js snowflake-cloud-data-platform subprocess geolocation webdriver html5-canvas centos garbage-collection dialog sql-update widget numbers concatenation qml tuples set java-stream smtp mapreduce ionic2 windows-10 rotation android-edittext modal-dialog spring-data nuget doctrine radio-button http-headers grid sonarqube lucene xmlhttprequest listbox switch-statement initialization internationalization components apache-camel boolean google-play serial-port gdb ios5 ldap youtube-api return eclipse-plugin pivot latex frameworks tags containers github-actions c++17 subquery dataset asp-classic foreign-keys label embedded uinavigationcontroller copy delegates struts2 google-cloud-storage migration protractor base64 queue find uibutton sql-server-2008-r2 arguments composer-php append jaxb zip stack tailwind-css cucumber autolayout ide entity-framework-6 iteration popup r-markdown windows-7 airflow vb6 g++ ssl-certificate hover clang jqgrid range gmail Next You’ll be prompted to create an account to view your personalized homepage. Home Questions AI Assist Labs Tags Challenges Chat Articles Users Jobs Companies Collectives Communities for your favorite technologies. Explore all Collectives Teams Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Try Teams for freeExplore Teams 3. Teams 4. Ask questions, find answers and collaborate at work with Stack Overflow for Teams. Explore Teams Collectives™ on Stack Overflow Find centralized, trusted content and collaborate around the technologies you use most. Learn more about Collectives Teams Q&A for work Connect and share knowledge within a single location that is structured and easy to search. Learn more about Teams Hang on, you can't upvote just yet. You'll need to complete a few actions and gain 15 reputation points before being able to upvote. Upvoting indicates when questions and answers are useful. What's reputation and how do I get it? Instead, you can save this post to reference later. Save this post for later Not now Thanks for your vote! You now have 5 free votes weekly. Free votes count toward the total vote score does not give reputation to the author Continue to help good content that is interesting, well-researched, and useful, rise to the top! To gain full voting privileges, earn reputation. Got it!Go to help center to learn more Using the hypergeometric test in python Ask Question Asked 5 years, 1 month ago Modified4 years, 7 months ago Viewed 4k times This question shows research effort; it is useful and clear 1 Save this question. 
I have two gene lists and I calculate the intersection between them. I need to calculate the p-value for the hypothesis that the intersection of these lists occurred by chance. I tried to implement that using Fisher's exact test (scipy function). Notice that I need a one-sided p-value. My code:
```python
import pandas as pd
from scipy import stats

def main(gene_path1, gene_path2, pop_size):
    genes1 = pd.read_csv(gene_path1, sep='\n', header=None)
    genes2 = pd.read_csv(gene_path2, sep='\n', header=None)
    intersection = pd.merge(genes1, genes2, how='inner').drop_duplicates()
    len_genes1 = genes1.count()
    len_genes2 = genes2.count()
    len_intersection = intersection.count()
    oddsratio, pvalue = stats.fisher_exact(, alternative='less')
    print(f'Genes1 len: {len_genes1}, Genes2 len: {len_genes2}, Intersection: {len_intersection}, pvalue: {pvalue}')
```
For the sake of simplicity, I used a list of numbers (not genes). Since it's too long I won't copy the entire file, but imagine two files with lots of random numbers separated by a newline. For example:
1
2
3
246
51451
...
The question is: how can I be sure that I specified the arguments for the fisher_exact function correctly? Is it right for the hypothesis I am trying to check? I suspect that I have done it incorrectly but I'm not sure why. This might be a hint for what's wrong: I understand that the population size should be relevant, but I am not sure where to use it and how. Any leads or insights would be appreciated.
UPDATE: I tried to implement it in a different way.
```python
from scipy.stats import hypergeom as hg
import pandas as pd

def main(gene_path1, gene_path2, pop_size):
    genes1 = pd.read_csv(gene_path1, sep='\n', header=None)
    genes2 = pd.read_csv(gene_path2, sep='\n', header=None)
    intersection = pd.merge(genes1, genes2, how='inner').drop_duplicates()
    len_genes1 = genes1.count()
    len_genes2 = genes2.count()
    len_intersection = intersection.count()
    pvalue = hg.cdf(int(len_intersection) - 1, int(pop_size), int(len_genes1), int(len_genes2))
    print(f'Genes1 len: {len_genes1}, Genes2 len: {len_genes2}, Intersection: {len_intersection}, p value: {pvalue}')
```
I am just wondering if I got the arguments in the right place; how could I validate that?
python statistics p-value
edited Aug 18, 2020 at 14:16; asked Aug 18, 2020 at 10:32 by Eliran Turgeman

2 Answers

This should help too:
```python
from scipy import stats

g = 75      # number of submitted genes
k = 59      # size of the selection, i.e. submitted genes with at least one annotation in GO biological processes
m = 611     # number of "marked" elements, i.e. genes associated to this biological process
N = 13588   # total number of genes with some annotation in GOTERM_BP_FAT
n = N - m   # number of "non-marked" elements, i.e. genes not associated to this biological process
x = 19      # number of "marked" elements in the selection, i.e. genes of the group of interest that are associated to this biological process

# Python
stats.hypergeom(M=N, n=m, N=k).sf(x - 1)
# 4.989682834451419e-12

# R equivalent:
# phyper(q=x - 1, m=m, n=n, k=k, lower.tail=FALSE)
# 4.989683e-12
```
answered Feb 22, 2021 at 23:21 by O.rka

I wonder if you still have the same issue or not. However, I found this link pretty useful to make sure of your hypergeometric test results. Regarding your calculations, your result has to equal the cumulative probability P(X < len_intersection).
answered Jan 11, 2021 at 14:10 by Parisa Daj
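Tying the accepted approach back to the question's own variables, a minimal sketch (the gene counts below are invented for illustration; scipy's hypergeom.sf is the survival function, so sf(x - 1) gives the one-sided P(X >= x)):

```python
from scipy.stats import hypergeom

# Hypothetical stand-ins for the asker's quantities.
pop_size = 20000        # M: background population of genes
len_genes1 = 300        # n: genes in list 1 (the "marked" elements)
len_genes2 = 500        # N: genes in list 2 (the "draws")
len_intersection = 25   # x: genes appearing in both lists

# One-sided p-value for an overlap at least this large arising by chance.
pvalue = hypergeom.sf(len_intersection - 1, pop_size, len_genes1, len_genes2)
print(pvalue)
```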
189337
https://math.stackexchange.com/questions/4941504/sum-of-the-vectors-from-centre-o-to-the-polygon-vertices
geometry - Sum of the vectors from centre $O$ to the polygon vertices - Mathematics Stack Exchange
Asked 1 year, 2 months ago. Modified 1 year, 2 months ago. Viewed 360 times.
I'm attempting to calculate the sum of the vectors from the center of a regular polygon to each of the vertices. I have already solved it in a complex-analysis manner.
To represent the vertices of a regular polygon using polar coordinates, we define the vertices as follows:
$$A_i = \left(\cos\left(2\pi\cdot\tfrac{i-1}{n}\right),\ \sin\left(2\pi\cdot\tfrac{i-1}{n}\right)\right), \qquad i \in \{1,\dots,n\}.$$
To calculate the sum $\sum \vec{A_i}$, we need to evaluate the sums $\sum \cos\left(2\pi\cdot\tfrac{j-1}{n}\right)$ and $\sum \sin\left(2\pi\cdot\tfrac{j-1}{n}\right)$. These can be expressed using complex exponentials as follows:
$$\sum_{j=1}^{n} \cos\left(2\pi\cdot\tfrac{j-1}{n}\right) = \mathrm{Re}\left(\sum_{j=1}^{n} \exp\left(2\pi i\cdot\tfrac{j-1}{n}\right)\right) = \mathrm{Re}\left(\frac{1-\left[\exp\left(\tfrac{2\pi i}{n}\right)\right]^{n}}{1-\exp\left(\tfrac{2\pi i}{n}\right)}\right) = 0,$$
$$\sum_{j=1}^{n} \sin\left(2\pi\cdot\tfrac{j-1}{n}\right) = \mathrm{Im}\left(\sum_{j=1}^{n} \exp\left(2\pi i\cdot\tfrac{j-1}{n}\right)\right) = \mathrm{Im}\left(\frac{1-\left[\exp\left(\tfrac{2\pi i}{n}\right)\right]^{n}}{1-\exp\left(\tfrac{2\pi i}{n}\right)}\right) = 0.$$
Therefore, the sum of the vectors to each of the vertices of the polygon is $\vec{0}$. However, I'm wondering whether there is any geometric method to solve this problem. (Maybe the symmetric property of regular polygons?)
geometry algebra-precalculus vectors euclidean-geometry
edited Jul 5, 2024 at 8:04 by user21820; asked Jul 4, 2024 at 6:38 by Hank Wang

1 Answer

Whatever the sum is, it must be invariant under rotation by $2\pi/n$ radians around the center. There is only one such point...
answered Jul 4, 2024 at 6:43 by Qiaochu Yuan
Comment (Mathematician 42, Jul 4, 2024 at 6:50): The beautiful power of symmetry arguments, it literally brings me joy.
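Spelling the hint out as a short argument (an editorial note, not part of the original answer):

```latex
Let $R$ be the rotation of the plane about the centre by $2\pi/n$, and let
$v = \sum_{i=1}^{n} \vec{A_i}$. Since $R$ permutes the vertices of the
regular polygon,
\[
  Rv \;=\; \sum_{i=1}^{n} R\,\vec{A_i} \;=\; \sum_{i=1}^{n} \vec{A_i} \;=\; v,
\]
so $v$ is fixed by a nontrivial rotation. The only point fixed by such a
rotation is the centre itself, hence $v = \vec{0}$.
```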
189338
https://www.mathwizurd.com/chemistry/2015/5/10/quantum-numbers-and-electron-orbitals
Quantum Numbers and Electron Orbitals
David Witten
Assigning Quantum Numbers
The following relationships involving the three quantum numbers come from the solution of the Schrödinger wave equation for the hydrogen atom. In this solution, the values of the quantum numbers are fixed in the order listed.
1. The first number to be fixed is the principal quantum number, n, which may only have a positive, nonzero integral value: n = 1, 2, 3, 4, . . .
2. Second is the orbital angular momentum quantum number, L (used here instead of cursive l), which may be zero or a positive integer, but not larger than n - 1 (n being the principal quantum number): L = 0, 1, 2, 3, 4, . . . , n - 1
3. Third is the magnetic quantum number, Ml, which may be a negative or positive integer, including zero, ranging from -L to +L (where L is the orbital angular momentum quantum number): Ml = -L, -L + 1, . . . , -2, -1, 0, 1, 2, . . . , L - 1, L
Principal Shells and Subshells
All orbitals with the same value of n are in the same principal electronic shell or principal level. All orbitals with the same n and L values are in the same subshell.
Principal electronic shells are numbered according to the value of n. The first principal shell consists of orbitals with n = 1; the second principal shell of orbitals with n = 2; and so on. The value of n relates to the energy and most probable distance of an electron from the nucleus. The higher the value of n, the greater the electron energy and the farther the electron is from the nucleus. Therefore, the principal quantum number has a physical significance: it defines how far away the electron is. The quantum number L determines the angular distribution, or shape, of an orbital, and Ml determines the orientation of the orbital.
The number of subshells in a principal electronic shell is the same as the number of allowed values of L. In the first principal shell (n = 1), there is only one possible L value: 0. Therefore, there's only one subshell. So, to generalize: the nth principal electronic shell has n subshells.
The name given to a subshell depends on the value of the L quantum number. First four subshells:
s subshell: L = 0, p subshell: L = 1, d subshell: L = 2, f subshell: L = 3
The number of orbitals equals the number of possible values of Ml, namely -L, -L + 1, . . . , L - 1, L. That means that there are 2L + 1 orbitals in a subshell. First four orbitals (note: the lettering is the same, so the L values are equal):
s: 1 s orbital in an s subshell (2L + 1 = 1)
p: 3 p orbitals in a p subshell (2L + 1 = 3)
d: 5 d orbitals in a d subshell (2L + 1 = 5)
f: 7 f orbitals in an f subshell (2L + 1 = 7)
To designate a particular shell and subshell, we use a combination of a number and a letter. For example, the symbol 2p is used to mean the 2nd principal electronic shell and the p subshell and any of the three p orbitals.
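These counting rules are easy to enumerate mechanically; a short sketch (an editorial addition, not from the original post):

```python
# Enumerate the subshells and orbital counts allowed for a principal shell n,
# directly from the rules above: L = 0..n-1 and Ml = -L..+L.
def subshells(n):
    letters = "spdfghik"                # subshell letters for L = 0, 1, 2, ...
    for L in range(n):
        ml = list(range(-L, L + 1))     # 2L + 1 values of Ml, one per orbital
        print(f"{n}{letters[L]}: L = {L}, {len(ml)} orbitals, Ml in {ml}")

subshells(3)
# 3s: L = 0, 1 orbitals, Ml in [0]
# 3p: L = 1, 3 orbitals, Ml in [-1, 0, 1]
# 3d: L = 2, 5 orbitals, Ml in [-2, -1, 0, 1, 2]
```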
189339
https://www.gauthmath.com/solution/1812631387628677/100-89-78-
Solved: 100, 89, 78, ... [Math]
Question: 100, 89, 78, ...
Answer: 67
Explanation: Calculate the difference between consecutive numbers: 100 - 89 = 11, 89 - 78 = 11. The pattern is a decrease of 11 in each step. Subtract 11 from the last number to find the next number in the sequence: 78 - 11 = 67.
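In general terms (an editorial note, not part of the original solution), this is an arithmetic sequence with first term 100 and common difference -11, so

```latex
a_n = 100 - 11(n - 1), \qquad a_4 = 100 - 11 \cdot 3 = 67.
```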
189340
https://dl.icdst.org/pdfs/files/25a6d982ee80e1db7a4ebf7eeca4e0ec.pdf
The Text Mining Handbook
Text mining is a new and exciting area of computer science research that tries to solve the crisis of information overload by combining techniques from data mining, machine learning, natural language processing, information retrieval, and knowledge management. Similarly, link detection, a rapidly evolving approach to the analysis of text that shares and builds on many of the key elements of text mining, also provides new tools for people to better leverage their burgeoning textual data resources. Link detection relies on a process of building up networks of interconnected objects through various relationships in order to discover patterns and trends. The main tasks of link detection are to extract, discover, and link together sparse evidence from vast amounts of data sources, to represent and evaluate the significance of the related evidence, and to learn patterns to guide the extraction, discovery, and linkage of entities.
The Text Mining Handbook presents a comprehensive discussion of the state of the art in text mining and link detection. In addition to providing an in-depth examination of core text mining and link detection algorithms and operations, the work examines advanced preprocessing techniques, knowledge representation considerations, and visualization approaches. Finally, the book explores current real-world, mission-critical applications of text mining and link detection in such varied fields as corporate finance business intelligence, genomics research, and counterterrorism activities.
Dr. Ronen Feldman is a Senior Lecturer in the Mathematics and Computer Science Department of Bar-Ilan University and Director of the Data and Text Mining Laboratory. Dr. Feldman is cofounder, Chief Scientist, and President of ClearForest, Ltd., a leader in developing next-generation text mining applications for corporate and government clients. He also recently served as an Adjunct Professor at New York University's Stern School of Business. A pioneer in the areas of machine learning, data mining, and unstructured data management, he has authored or coauthored more than 70 published articles and conference papers in these areas.
James Sanger is a venture capitalist, applied technologist, and recognized industry expert in the areas of commercial data solutions, Internet applications, and IT security products. He is a partner at ABS Ventures, an independent venture firm founded in 1982 and originally associated with technology banking leader Alex. Brown and Sons. Immediately before joining ABS Ventures, Mr. Sanger was a Managing Director in the New York offices of DB Capital Venture Partners, the global venture capital arm of Deutsche Bank. Mr. Sanger has been a board member of several thought-leading technology companies, including Inxight Software, Gomez Inc., and ClearForest, Inc.; he has also served as an official observer to the boards of AlphaBlox (acquired by IBM in 2004), Intralinks, and Imagine Software and as a member of the Technical Advisory Board of Qualys, Inc.
THE TEXT MINING HANDBOOK Advanced Approaches in Analyzing Unstructured Data Ronen Feldman Bar-Ilan University, Israel James Sanger ABS Ventures, Waltham, Massachusetts CAMBRIDGE UNIVERSITY PRESS Cambridge, New York, Melbourne, Madrid, Cape Town, Singapore, São Paulo Cambridge University Press The Edinburgh Building, Cambridge CB2 8RU, UK First published in print format ISBN-13 978-0-521-83657-9 ISBN-13 978-0-511-33507-5 © Ronen Feldman and James Sanger 2007 2006 Information on this title: www.cambridge.org/9780521836579 This publication is in copyright. Subject to statutory exception and to the provision of relevant collective licensing agreements, no reproduction of any part may take place without the written permission of Cambridge University Press. ISBN-10 0-511-33507-5 ISBN-10 0-521-83657-3 Cambridge University Press has no responsibility for the persistence or accuracy of urls for external or third-party internet websites referred to in this publication, and does not guarantee that any content on such websites is, or will remain, accurate or appropriate. Published in the United States of America by Cambridge University Press, New York www.cambridge.org hardback eBook (NetLibrary) eBook (NetLibrary) hardback In loving memory of my father, Issac Feldman Contents Preface page x I. Introduction to Text Mining 1 I.1 Defining Text Mining 1 I.2 General Architecture of Text Mining Systems 13 II. Core Text Mining Operations 19 II.1 Core Text Mining Operations 19 II.2 Using Background Knowledge for Text Mining 41 II.3 Text Mining Query Languages 51 III. Text Mining Preprocessing Techniques 57 III.1 Task-Oriented Approaches 58 III.2 Further Reading 62 IV. Categorization 64 IV.1 Applications of Text Categorization 65 IV.2 Definition of the Problem 66 IV.3 Document Representation 68 IV.4 Knowledge Engineering Approach to TC 70 IV.5 Machine Learning Approach to TC 70 IV.6 Using Unlabeled Data to Improve Classification 78 IV.7 Evaluation of Text Classifiers 79 IV.8 Citations and Notes 80 V. Clustering 82 V.1 Clustering Tasks in Text Analysis 82 V.2 The General Clustering Problem 84 V.3 Clustering Algorithms 85 V.4 Clustering of Textual Data 88 V.5 Citations and Notes 92 vii viii Contents VI. Information Extraction 94 VI.1 Introduction to Information Extraction 94 VI.2 Historical Evolution of IE: The Message Understanding Conferences and Tipster 96 VI.3 IE Examples 101 VI.4 Architecture of IE Systems 104 VI.5 Anaphora Resolution 109 VI.6 Inductive Algorithms for IE 119 VI.7 Structural IE 122 VI.8 Further Reading 129 VII. Probabilistic Models for Information Extraction 131 VII.1 Hidden Markov Models 131 VII.2 Stochastic Context-Free Grammars 137 VII.3 Maximal Entropy Modeling 138 VII.4 Maximal Entropy Markov Models 140 VII.5 Conditional Random Fields 142 VII.6 Further Reading 145 VIII. Preprocessing Applications Using Probabilistic and Hybrid Approaches 146 VIII.1 Applications of HMM to Textual Analysis 146 VIII.2 Using MEMM for Information Extraction 152 VIII.3 Applications of CRFs to Textual Analysis 153 VIII.4 TEG: Using SCFG Rules for Hybrid Statistical–Knowledge-Based IE 155 VIII.5 Bootstrapping 166 VIII.6 Further Reading 175 IX. Presentation-Layer Considerations for Browsing and Query Refinement 177 IX.1 Browsing 177 IX.2 Accessing Constraints and Simple Specification Filters at the Presentation Layer 185 IX.3 Accessing the Underlying Query Language 186 IX.4 Citations and Notes 187 X. 
X. Visualization Approaches
   X.1 Introduction
   X.2 Architectural Considerations
   X.3 Common Visualization Approaches for Text Mining
   X.4 Visualization Techniques in Link Analysis
   X.5 Real-World Example: The Document Explorer System
XI. Link Analysis
   XI.1 Preliminaries
   XI.2 Automatic Layout of Networks
   XI.3 Paths and Cycles in Graphs
   XI.4 Centrality
   XI.5 Partitioning of Networks
   XI.6 Pattern Matching in Networks
   XI.7 Software Packages for Link Analysis
   XI.8 Citations and Notes
XII. Text Mining Applications
   XII.1 General Considerations
   XII.2 Corporate Finance: Mining Industry Literature for Business Intelligence
   XII.3 A "Horizontal" Text Mining Application: Patent Analysis Solution Leveraging a Commercial Text Analytics Platform
   XII.4 Life Sciences Research: Mining Biological Pathway Information with GeneWays
Appendix A: DIAL: A Dedicated Information Extraction Language for Text Mining
   A.1 What Is the DIAL Language?
   A.2 Information Extraction in the DIAL Environment
   A.3 Text Tokenization
   A.4 Concept and Rule Structure
   A.5 Pattern Matching
   A.6 Pattern Elements
   A.7 Rule Constraints
   A.8 Concept Guards
   A.9 Complete DIAL Examples
Bibliography
Index

Preface

The information age has made it easy to store large amounts of data. The proliferation of documents available on the Web, on corporate intranets, on news wires, and elsewhere is overwhelming. However, although the amount of data available to us is constantly increasing, our ability to absorb and process this information remains constant. Search engines only exacerbate the problem by making more and more documents available in a matter of a few keystrokes.

Text mining is a new and exciting research area that tries to solve the information overload problem by using techniques from data mining, machine learning, natural language processing (NLP), information retrieval (IR), and knowledge management. Text mining involves the preprocessing of document collections (text categorization, information extraction, term extraction), the storage of the intermediate representations, the techniques to analyze these intermediate representations (such as distribution analysis, clustering, trend analysis, and association rules), and visualization of the results.

This book presents a general theory of text mining along with the main techniques behind it. We offer a generalized architecture for text mining and outline the algorithms and data structures typically used by text mining systems.

The book is aimed at advanced undergraduate students, graduate students, academic researchers, and professional practitioners interested in complete coverage of the text mining field. We have included all the topics critical to people who plan to develop text mining systems or to use them. In particular, we have covered preprocessing techniques such as text categorization, text clustering, and information extraction and analysis techniques such as association rules and link analysis. The book tries to blend together theory and practice; we have attempted to provide many real-life scenarios that show how the different techniques are used in practice. When writing the book we tried to make it as self-contained as possible and have compiled a comprehensive bibliography for each topic so that the reader can expand his or her knowledge accordingly.
BOOK OVERVIEW

The book starts with a gentle introduction to text mining that presents the basic definitions and prepares the reader for the next chapters. In the second chapter we describe the core text mining operations in detail while providing examples for each operation. The third chapter serves as an introduction to text mining preprocessing techniques. We provide a taxonomy of the operations and set the ground for Chapters IV through VII.

Chapter IV offers a comprehensive description of the text categorization problem and outlines the major algorithms for performing text categorization. Chapter V introduces another important text preprocessing task called text clustering, and we again provide a concrete definition of the problem and outline the major algorithms for performing text clustering. Chapter VI addresses what is probably the most important text preprocessing technique for text mining – namely, information extraction. We describe the general problem of information extraction and supply the relevant definitions. Several examples of the output of information extraction in several domains are also presented.

In Chapter VII, we discuss several state-of-the-art probabilistic models for information extraction, and Chapter VIII describes several preprocessing applications that either use the probabilistic models of Chapter VII or are based on hybrid approaches incorporating several models.

The presentation layer of a typical text mining system is considered in Chapter IX. We focus mainly on aspects related to browsing large document collections and on issues related to query refinement. Chapter X surveys the common visualization techniques used either to visualize the document collection or the results obtained from the text mining operations. Chapter XI introduces the fascinating area of link analysis. We present link analysis as an analytical step based on the foundation of the text preprocessing techniques discussed in the previous chapters, most specifically information extraction. The chapter begins with basic definitions from graph theory and moves to common techniques for analyzing large networks of entities.

Finally, in Chapter XII, three real-world applications of text mining are considered. We begin by describing an application for articles posted in BioWorld magazine. This application identifies major biological entities such as genes and proteins and enables visualization of relationships between those entities. We then proceed to the GeneWays application, which is based on analysis of PubMed articles. The next application is based on analysis of U.S. patents and enables monitoring trends and visualizing relationships between inventors, assignees, and technology terms.

The appendix explains the DIAL language, which is a dedicated information extraction language. We outline the structure of the language and describe its exact syntax. We also offer several code examples that show how DIAL can be used to extract a variety of entities and relationships. A detailed bibliography concludes the book.

ACKNOWLEDGMENTS

This book would not have been possible without the help of many individuals. In addition to acknowledgments made throughout the book, we feel it important to
take the time to offer special thanks to an important few. Among these we would like to mention especially Benjamin Rosenfeld, who devoted many hours to revising the categorization and clustering chapters. The people at ClearForest Corporation also provided help in obtaining screen shots of applications using ClearForest technologies – most notably in Chapter XII. In particular, we would like to mention the assistance we received from Rafi Vesserman, Yonatan Aumann, Jonathan Schler, Yair Liberzon, Felix Harmatz, and Yizhar Regev. Their support meant a great deal to us in the completion of this project. Adding to this list, we would also like to thank Ian Bonner and Kathy Bentaieb of Inxight Software for the screen shots used in Chapter X. Also, we would like to extend our appreciation to Andrey Rzhetsky for his personal screen shots of the GeneWays application.

A book written on a subject such as text mining is inevitably a culmination of many years of work. As such, our gratitude is extended to both Haym Hirsh and Oren Etzioni, early collaborators in the field. In addition, we would like to thank Lauren Cowles of Cambridge University Press for reading our drafts and patiently making numerous comments on how to improve the structure of the book and its readability. Appreciation is also owed to Jessica Farris for help in keeping two very busy coauthors on track.

Finally, it brings us great pleasure to thank those dearest to us – our children Yael, Hadar, Yair, Neta, and Frithjof – for leaving us undisturbed in our rooms while we were writing. We hope that, now that the book is finished, we will have more time to devote to you and to enjoy your growth. We are also greatly indebted to our dear wives Hedva and Lauren for bearing with our long hours on the computer, doing research, and writing the endless drafts. Without your help, confidence, and support we would never have completed this book. Thank you for everything. We love you!

I Introduction to Text Mining

I.1 DEFINING TEXT MINING

Text mining can be broadly defined as a knowledge-intensive process in which a user interacts with a document collection over time by using a suite of analysis tools. In a manner analogous to data mining, text mining seeks to extract useful information from data sources through the identification and exploration of interesting patterns. In the case of text mining, however, the data sources are document collections, and interesting patterns are found not among formalized database records but in the unstructured textual data in the documents in these collections.

Certainly, text mining derives much of its inspiration and direction from seminal research on data mining. Therefore, it is not surprising to find that text mining and data mining systems evince many high-level architectural similarities. For instance, both types of systems rely on preprocessing routines, pattern-discovery algorithms, and presentation-layer elements such as visualization tools to enhance the browsing of answer sets. Further, text mining adopts many of the specific types of patterns in its core knowledge discovery operations that were first introduced and vetted in data mining research.

Because data mining assumes that data have already been stored in a structured format, much of its preprocessing focus falls on two critical tasks: scrubbing and normalizing data and creating extensive numbers of table joins. In contrast, for text mining systems, preprocessing operations center on the identification and extraction of representative features for natural language documents.
These preprocessing operations are responsible for transforming unstructured data stored in document collections into a more explicitly structured intermediate format, which is a concern that is not relevant for most data mining systems. Moreover, because of the centrality of natural language text to its mission, text mining also draws on advances made in other computer science disciplines concerned with the handling of natural language. Perhaps most notably, text mining exploits techniques and methodologies from the areas of information retrieval, information extraction, and corpus-based computational linguistics.

I.1.1 The Document Collection and the Document

A key element of text mining is its focus on the document collection. At its simplest, a document collection can be any grouping of text-based documents. Practically speaking, however, most text mining solutions are aimed at discovering patterns across very large document collections. The number of documents in such collections can range from the many thousands to the tens of millions.

Document collections can be either static, in which case the initial complement of documents remains unchanged, or dynamic, which is a term applied to document collections characterized by their inclusion of new or updated documents over time. Extremely large document collections, as well as document collections with very high rates of document change, can pose performance optimization challenges for various components of a text mining system.

An illustration of a typical real-world document collection suitable as initial input for text mining is PubMed, the National Library of Medicine's online repository of citation-related information for biomedical research papers. PubMed has received significant attention from computer scientists interested in employing text mining techniques because this online service contains text-based document abstracts for more than 12 million research papers on topics in the life sciences. PubMed represents the most comprehensive online collection of biomedical research papers published in the English language, and it houses data relating to a considerable selection of publications in other languages as well. The publication dates for the main body of PubMed's collected papers stretch from 1966 to the present. The collection is dynamic and growing, for an estimated 40,000 new biomedical abstracts are added every month.

Even subsections of PubMed's data repository can represent substantial document collections for specific text mining applications. For instance, a relatively recent PubMed search for only those abstracts that contain the words protein or gene returned a result set of more than 2,800,000 documents, and more than 66 percent of these documents were published within the last decade. Indeed, a very narrowly defined search for abstracts mentioning epidermal growth factor receptor returned more than 10,000 documents.

The sheer size of document collections like that represented by PubMed makes manual attempts to correlate data across documents, map complex relationships, or identify trends at best extremely labor-intensive and at worst nearly impossible to achieve. Automatic methods for identifying and exploring interdocument data relationships dramatically enhance the speed and efficiency of research activities.
Indeed, in some cases, automated exploration techniques like those found in text mining are not just a helpful adjunct but a baseline requirement for researchers to be able, in a practicable way, to recognize subtle patterns across large numbers of natural language documents.

Text mining systems, however, usually do not run their knowledge discovery algorithms on unprepared document collections. Considerable emphasis in text mining is devoted to what are commonly referred to as preprocessing operations. Typical text mining preprocessing operations are discussed in detail in Chapter III. Text mining preprocessing operations include a variety of different types of techniques culled and adapted from information retrieval, information extraction, and computational linguistics research that transform raw, unstructured, original-format content (like that which can be downloaded from PubMed) into a carefully structured, intermediate data format. Knowledge discovery operations, in turn, are operated against this specially structured intermediate representation of the original document collection.

The Document

Another basic element in text mining is the document. For practical purposes, a document can be very informally defined as a unit of discrete textual data within a collection that usually, but not necessarily, correlates with some real-world document such as a business report, legal memorandum, e-mail, research paper, manuscript, article, press release, or news story. Although it is not typical, a document can be defined a little less arbitrarily within the context of a particular document collection by describing a prototypical document based on its representation of a similar class of entities within that collection.

One should not, however, infer from this that a given document necessarily exists only within the context of one particular collection. It is important to recognize that a document can (and generally does) exist in any number or type of collections – from the very formally organized to the very ad hoc. A document can also be a member of different document collections, or different subsets of the same document collection, and can exist in these different collections at the same time. For example, a document relating to Microsoft's antitrust litigation could exist in completely different document collections oriented toward current affairs, legal affairs, antitrust-related legal affairs, and software company news.

"Weakly Structured" and "Semistructured" Documents

Despite the somewhat misleading label that it bears as unstructured data, a text document may be seen, from many perspectives, as a structured object. From a linguistic perspective, even a rather innocuous document demonstrates a rich amount of semantic and syntactical structure, although this structure is implicit and to some degree hidden in its textual content. In addition, typographical elements such as punctuation marks, capitalization, numerics, and special characters – particularly when coupled with layout artifacts such as white spacing, carriage returns, underlining, asterisks, tables, columns, and so on – can often serve as a kind of "soft markup" language, providing clues to help identify important document subcomponents such as paragraphs, titles, publication dates, author names, table records, headers, and footnotes. Word sequence may also be a structurally meaningful dimension to a document.
At the other end of the "unstructured" spectrum, some text documents, like those generated from a WYSIWYG HTML editor, actually possess from their inception more overt types of embedded metadata in the form of formalized markup tags. Documents that have relatively little in the way of strong typographical, layout, or markup indicators to denote structure – like most scientific research papers, business reports, legal memoranda, and news stories – are sometimes referred to as free-format or weakly structured documents. On the other hand, documents with extensive and consistent format elements in which field-type metadata can be more easily inferred – such as some e-mail, HTML Web pages, PDF files, and word-processing files with heavy document templating or style-sheet constraints – are occasionally described as semistructured documents.

I.1.2 Document Features

The preprocessing operations that support text mining attempt to leverage many different elements contained in a natural language document in order to transform it from an irregular and implicitly structured representation into an explicitly structured representation. However, given the potentially large number of words, phrases, sentences, typographical elements, and layout artifacts that even a short document may have – not to mention the potentially vast number of different senses that each of these elements may have in various contexts and combinations – an essential task for most text mining systems is the identification of a simplified subset of document features that can be used to represent a particular document as a whole. We refer to such a set of features as the representational model of a document and say that individual documents are represented by the set of features that their representational models contain.

Even with attempts to develop efficient representational models, each document in a collection is usually made up of a large number – sometimes an exceedingly large number – of features. The large number of features required to represent documents in a collection affects almost every aspect of a text mining system's approach, design, and performance. Problems relating to high feature dimensionality (i.e., the size and scale of possible combinations of feature values for data) are typically of much greater magnitude in text mining systems than in classic data mining systems. Structured representations of natural language documents have much larger numbers of potentially representative features – and thus higher numbers of possible combinations of feature values – than one generally finds with records in relational or hierarchical databases.

For even the most modest document collections, the number of word-level features required to represent the documents in these collections can be exceedingly large. For example, in an extremely small collection of 15,000 documents culled from Reuters news feeds, more than 25,000 nontrivial word stems could be identified. Even when one works with more optimized feature types, tens of thousands of concept-level features may still be relevant for a single application domain. The number of attributes in a relational database that are analyzed in a data mining task is usually significantly smaller.

The high dimensionality of potentially representative features in document collections is a driving factor in the development of text mining preprocessing operations aimed at creating more streamlined representational models.
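To make the dimensionality issue concrete, the following minimal sketch (our own illustration, not drawn from the book, using invented toy documents) builds binary word-level feature vectors for a tiny collection. Even three short documents yield a feature space wider than any single document, and real collections push this into the tens or hundreds of thousands of features:

    # A minimal sketch (invented toy data): binary word-level feature vectors.
    # Illustrates how the feature space grows with the collection while each
    # document sets only a small fraction of its vector's entries to 1.
    import re

    docs = [
        "Protein P1 interacts with enzyme E1 in yeast.",
        "Enzyme E1 and enzyme E2 share functional similarities.",
        "Several articles link enzyme E2 to protein P2.",
    ]

    def tokenize(text):
        # Lowercase word tokens; a real system would also drop stop words,
        # stem, and filter meaningless numerics.
        return re.findall(r"[a-z0-9]+", text.lower())

    vocabulary = sorted({tok for d in docs for tok in tokenize(d)})
    index = {tok: i for i, tok in enumerate(vocabulary)}

    def to_binary_vector(text):
        vec = [0] * len(vocabulary)
        for tok in tokenize(text):
            vec[index[tok]] = 1
        return vec

    vectors = [to_binary_vector(d) for d in docs]
    nonzero = sum(map(sum, vectors))
    print(f"{len(vocabulary)} features; "
          f"{nonzero / (len(docs) * len(vocabulary)):.0%} of entries nonzero")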
This high dimensionality also indirectly contributes to other conditions that separate text mining systems from data mining systems, such as greater levels of pattern overabundance and more acute requirements for postquery refinement techniques.

Another characteristic of natural language documents is what might be described as feature sparsity. Only a small percentage of all possible features for a document collection as a whole appears in any single document, and thus when a document is represented as a binary vector of features, nearly all values of the vector are zero. The tuple dimension is also sparse. That is, some features often appear in only a few documents, which means that the support of many patterns is quite low.

Commonly Used Document Features: Characters, Words, Terms, and Concepts

Because text mining algorithms operate on the feature-based representations of documents and not the underlying documents themselves, there is often a trade-off between two important goals. The first goal is to achieve the correct calibration of the volume and semantic level of features to portray the meaning of a document accurately, which tends to incline text mining preprocessing operations toward selecting or extracting relatively more features to represent documents. The second goal is to identify features in a way that is most computationally efficient and practical for pattern discovery, which is a process that emphasizes the streamlining of representative feature sets; such streamlining is sometimes supported by the validation, normalization, or cross-referencing of features against controlled vocabularies or external knowledge sources such as dictionaries, thesauri, ontologies, or knowledge bases to assist in generating smaller representative sets of more semantically rich features.

Although many potential features can be employed to represent documents,1 the following four types are most commonly used:

■Characters. The individual component-level letters, numerals, special characters, and spaces are the building blocks of higher-level semantic features such as words, terms, and concepts. A character-level representation can include the full set of all characters for a document or some filtered subset. Character-based representations without positional information (i.e., bag-of-characters approaches) are often of very limited utility in text mining applications. Character-based representations that include some level of positional information (e.g., bigrams or trigrams) are somewhat more useful and common. In general, however, character-based representations can often be unwieldy for some types of text processing techniques because the feature space for a document is fairly unoptimized. On the other hand, this feature space can in many ways be viewed as the most complete of any representation of a real-world text document.

■Words. Specific words selected directly from a "native" document are at what might be described as the basic level of semantic richness. For this reason, word-level features are sometimes referred to as existing in the native feature space of a document. In general, a single word-level feature should equate with, or have the value of, no more than one linguistic token. Phrases, multiword expressions, or even multiword hyphenates would not constitute single word-level features.
It is possible for a word-level representation of a document to include a feature for each word within that document – that is the "full text," where a document is represented by a complete and unabridged set of its word-level features. This can lead to some word-level representations of document collections having tens or even hundreds of thousands of unique words in their feature space. However, most word-level document representations exhibit at least some minimal optimization and therefore consist of subsets of representative features filtered for items such as stop words, symbolic characters, and meaningless numerics.

■Terms. Terms are single words and multiword phrases selected directly from the corpus of a native document by means of term-extraction methodologies. Term-level features, in the sense of this definition, can only be made up of specific words and expressions found within the native document for which they are meant to be generally representative. Hence, a term-based representation of a document is necessarily composed of a subset of the terms in that document. For example, if a document contained the sentence President Abraham Lincoln experienced a career that took him from log cabin to White House, a list of terms to represent the document could include single word forms such as "Lincoln," "took," "career," and "cabin" as well as multiword forms like "President Abraham Lincoln," "log cabin," and "White House."

Several term-extraction methodologies can convert the raw text of a native document into a series of normalized terms – that is, sequences of one or more tokenized and lemmatized word forms associated with part-of-speech tags. Sometimes an external lexicon is also used to provide a controlled vocabulary for term normalization. Term-extraction methodologies employ various approaches for generating and filtering an abbreviated list of the most meaningful candidate terms from among a set of normalized terms for the representation of a document. This culling process results in a smaller but relatively more semantically rich document representation than that found in word-level document representations.

■Concepts.2 Concepts are features generated for a document by means of manual, statistical, rule-based, or hybrid categorization methodologies. Concept-level features can be manually generated for documents but are now more commonly extracted from documents using complex preprocessing routines that identify single words, multiword expressions, whole clauses, or even larger syntactical units that are then related to specific concept identifiers. For instance, a document collection that includes reviews of sports cars may not actually include the specific word "automotive" or the specific phrase "test drives," but the concepts "automotive" and "test drives" might nevertheless be found among the set of concepts used to identify and represent the collection. Many categorization methodologies involve a degree of cross-referencing against an external knowledge source; for some statistical methods, this source might simply be an annotated collection of training documents.

1 Beyond the three feature types discussed and defined here – namely, words, terms, and concepts – other features that have been used for representing documents include linguistic phrases, nonconsecutive phrases, keyphrases, character bigrams, character trigrams, frames, and parse trees.
For manual and rule-based categorization methods, the cross-referencing and validation of prospective concept-level features typically involve interaction with a "gold standard" such as a preexisting domain ontology, lexicon, or formal concept hierarchy – or even just the mind of a human domain expert. Unlike word- and term-level features, concept-level features can consist of words not specifically found in the native document.

Of the four types of features described here, terms and concepts reflect the features with the most condensed and expressive levels of semantic value, and there are many advantages to their use in representing documents for text mining purposes. With regard to the overall size of their feature sets, term- and concept-based representations exhibit roughly the same efficiency but are generally much more efficient than character- or word-based document models. Term-level representations can sometimes be more easily and automatically generated from the original source text (through various term-extraction techniques) than concept-level representations, which as a practical matter have often entailed some level of human interaction.

Concept-level representations, however, are much better than any other feature-set representation at handling synonymy and polysemy and are clearly best at relating a given feature to its various hyponyms and hypernyms. Concept-based representations can be processed to support very sophisticated concept hierarchies, and arguably provide the best representations for leveraging the domain knowledge afforded by ontologies and knowledge bases.

Still, concept-level representations do have a few potential drawbacks. Possible disadvantages of using concept-level features to represent documents include (a) the relative complexity of applying the heuristics, during preprocessing operations, required to extract and validate concept-type features and (b) the domain-dependence of many concepts.3 Concept-level document representations generated by categorization are often stored in vector formats. For instance, both CDM-based methodologies and Los Alamos II–type concept extraction approaches result in individual documents being stored as vectors.

Hybrid approaches to the generation of feature-based document representations can exist. By way of example, a particular text mining system's preprocessing operations could first extract terms using term-extraction techniques and then match or normalize these terms, or do both, by winnowing them against a list of meaningful entities and topics (i.e., concepts) extracted through categorization. Such hybrid approaches, however, need careful planning, testing, and optimization to avoid having dramatic – and extremely resource-intensive – growth in the feature dimensionality of individual document representations without proportionately increased levels of system effectiveness.

For the most part, this book concentrates on text mining solutions that rely on documents represented by concept-level features, referring to other feature types where necessary to highlight idiosyncratic characteristics or techniques. Nevertheless, many of the approaches described in this chapter for identifying and browsing patterns within document collections based on concept-level representations can also be applied – perhaps with varying results – to document collections represented by other feature models.

2 Although some computer scientists make distinctions between keywords and concepts (e.g., Blake and Pratt 2001), this book recognizes the two as relatively interchangeable labels for the same feature type and will generally refer to either under the label concept.

3 It should at least be mentioned that there are some more distinct disadvantages to using manually generated concept-level representations. For instance, manually generated concepts are fixed, labor-intensive to assign, and so on. See Blake and Pratt (2001).
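As a rough illustration of how candidate terms might be generated from raw text, the sketch below (our own, not the book's; the stop-word list and boundary-filtering rule are invented stand-ins) over-generates n-gram candidates from the Lincoln sentence used earlier. An actual term-extraction pipeline would add part-of-speech tagging, lemmatization, and statistical filtering to cull the list:

    # Illustrative sketch only: naive candidate-term generation.
    # Real term extraction adds POS tagging, lemmatization, and statistical
    # filtering; the stop-word list here is an invented stand-in.
    import re

    STOP_WORDS = {"a", "that", "him", "from", "to"}

    def candidate_terms(sentence, max_len=3):
        tokens = re.findall(r"[A-Za-z]+", sentence)
        terms = set()
        for n in range(1, max_len + 1):
            for i in range(len(tokens) - n + 1):
                gram = tokens[i:i + n]
                # Keep n-grams whose boundary words are not stop words.
                if (gram[0].lower() not in STOP_WORDS
                        and gram[-1].lower() not in STOP_WORDS):
                    terms.add(" ".join(gram))
        return terms

    sentence = ("President Abraham Lincoln experienced a career that "
                "took him from log cabin to White House")
    found = candidate_terms(sentence)
    for t in ["President Abraham Lincoln", "log cabin", "White House"]:
        print(t, "->", t in found)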
Domains and Background Knowledge

In text mining systems, concepts belong not only to the descriptive attributes of a particular document but generally also to domains. With respect to text mining, a domain has come to be loosely defined as a specialized area of interest for which dedicated ontologies, lexicons, and taxonomies of information may be developed. Domains can include very broad areas of subject matter (e.g., biology) or more narrowly defined specialisms (e.g., genomics or proteomics). Some other noteworthy domains for text mining applications include financial services (with significant subdomains like corporate finance, securities trading, and commodities), world affairs, international law, counterterrorism studies, patent research, and materials science.

Text mining systems with some element of domain-specificity in their orientation – that is, most text mining systems designed for a practical purpose – can leverage information from formal external knowledge sources for these domains to greatly enhance elements of their preprocessing, knowledge discovery, and presentation-layer operations.

Domain knowledge, perhaps more frequently referred to in the literature as background knowledge, can be used in text mining preprocessing operations to enhance concept extraction and validation activities. Access to background knowledge – although not strictly necessary for the creation of concept hierarchies within the context of a single document or document collection – can play an important role in the development of more meaningful, consistent, and normalized concept hierarchies.

Text mining makes use of background knowledge to a greater extent than, and in different ways from, data mining. For advanced text mining applications that can take advantage of background knowledge, features are not just elements in a flat set, as is most often the case in structured data applications. By relating features by way of lexicons and ontologies, advanced text mining systems can create fuller representations of document collections in preprocessing operations and support enhanced query and refinement functionalities.

Indeed, background knowledge can be used to inform many different elements of a text mining system. In preprocessing operations, background knowledge is an important adjunct to classification and concept-extraction methodologies. Background knowledge can also be leveraged to enhance core mining algorithms and browsing operations. In addition, domain-oriented information serves as one of the main bases for search refinement techniques.

In addition, background knowledge may be utilized by other components of a text mining system. For instance, background knowledge may be used to construct meaningful constraints in knowledge discovery operations. Likewise, background knowledge may also be used to formulate constraints that allow users greater flexibility when browsing large result sets.
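To suggest how a lexicon drawn from background knowledge might normalize extracted features into concepts, here is a toy sketch (ours, echoing the sports-car example from the feature discussion; the lexicon entries are invented and stand in for a real domain ontology):

    # Toy sketch: normalizing surface forms to controlled-vocabulary
    # concepts via an invented lexicon standing in for a domain ontology.
    LEXICON = {
        "road test": "test drives",
        "test drive": "test drives",
        "sports car": "automotive",
        "sedan": "automotive",
    }

    def to_concepts(terms):
        # Map extracted terms to concept identifiers; unmapped terms are
        # dropped (a real system might instead flag them for review).
        return {LEXICON[t.lower()] for t in terms if t.lower() in LEXICON}

    print(to_concepts(["Road test", "sports car", "dashboard"]))
    # -> {'test drives', 'automotive'}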
I.1.3 The Search for Patterns and Trends

Although text mining preprocessing operations play the critical role of transforming unstructured content of a raw document collection into a more tractable concept-level data representation, the core functionality of a text mining system resides in the analysis of concept co-occurrence patterns across documents in a collection. Indeed, text mining systems rely on algorithmic and heuristic approaches to consider distributions, frequent sets, and various associations of concepts at an interdocument level in an effort to enable a user to discover the nature and relationships of concepts as reflected in the collection as a whole.

For example, in a collection of news articles, a large number of articles on politician X and "scandal" may indicate a negative image of the character of X and alert his or her handlers to the need for a new public relations campaign. Or, a growing number of articles on company Y and product Z may indicate a shift of focus in company Y's interests – a shift that should be noted by its competitors. In another example, a potential relationship might be inferred between two proteins P1 and P2 by the pattern of (a) several articles mentioning the protein P1 in relation to the enzyme E1, (b) a few articles describing functional similarities between enzymes E1 and E2 without referring to any protein names, and (c) several articles linking enzyme E2 to protein P2. In all three of these examples, the information is not provided by any single document but rather from the totality of the collection. Text mining's methods of pattern analysis seek to discover co-occurrence relationships between concepts as reflected by the totality of the corpus at hand.

Text mining methods – often based on large-scale, brute-force search directed at large, high-dimensionality feature sets – generally produce very large numbers of patterns. This results in an overabundance problem with respect to identified patterns that is usually much more severe than that encountered in data mining applications aimed at structured data sources. A main operational task for text mining systems is to enable a user to limit pattern overabundance by providing refinement capabilities that key on various specifiable measures of "interestingness" for search results. Such refinement capabilities prevent system users from getting overwhelmed by too many uninteresting results.

The problem of pattern overabundance can exist in all knowledge discovery activities. It is simply heightened when interacting with large collections of text documents, and, therefore, text mining operations must necessarily be conceived to provide not only relevant but also manageable result sets to a user.

Text mining also builds on various data mining approaches first specified in Lent, Agrawal, and Srikant (1997) to identify trends in data. In text mining, trend analysis relies on date-and-time stamping of documents within a collection so that comparisons can be made between a subset of documents relating to one period and a subset of documents relating to another.

Trend analysis across document subsets attempts to answer certain types of questions. For instance, in relation to a collection of news stories, Montes-y-Gomez, Gelbukh, and Lopez-Lopez (2001b) suggests that trend analysis concerns itself with questions such as the following:

■What is the general trend of the news topics between two periods (as represented by two different document subsets)?
■Are the news topics nearly the same or are they widely divergent across the two periods?
■Can emerging and disappearing topics be identified?
■Did any topics maintain the same level of occurrence during the two periods?

In these illustrative questions, individual "news topics" can be seen as specific concepts in the document collection. Different types of trend analytics attempt to compare the frequencies of such concepts (i.e., number of occurrences) in the documents that make up the two periods' respective document subcollections. Additional types of analysis, also derived from data mining, that can be used to support trend analysis are ephemeral association discovery and deviation detection. Some specific methods of trend analysis are described in Section II.1.5.
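The sketch below (our own, with invented toy data) makes the frequency comparison concrete: each document is reduced to its set of concept labels, and concept counts in the two periods' subcollections are compared to flag emerging, disappearing, rising, or falling topics:

    # Hedged sketch: comparing concept frequencies between two periods'
    # document subcollections; the documents are invented toy examples.
    from collections import Counter

    period1 = [{"iran", "reagan"}, {"nicaragua", "reagan"}, {"iran"}]
    period2 = [{"iran", "scandal"}, {"scandal", "reagan"}, {"scandal"}]

    def concept_frequencies(docs):
        counts = Counter()
        for concepts in docs:
            counts.update(concepts)
        return counts

    f1, f2 = concept_frequencies(period1), concept_frequencies(period2)
    for concept in sorted(set(f1) | set(f2)):
        n1, n2 = f1[concept], f2[concept]
        trend = ("emerging" if n1 == 0 else
                 "disappearing" if n2 == 0 else
                 "rising" if n2 > n1 else
                 "falling" if n2 < n1 else "steady")
        print(f"{concept}: {n1} -> {n2} ({trend})")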
I.1.4 The Importance of the Presentation Layer

Perhaps the key presentation-layer functionality supported by text mining systems is browsing. Most contemporary text mining systems support browsing that is both dynamic and content-based, for the browsing is guided by the actual textual content of a particular document collection and not by anticipated or rigorously prespecified structures. Commonly, user browsing is facilitated by the graphical presentation of concept patterns in the form of a hierarchy to aid interactivity by organizing concepts for investigation.

Browsing is also navigational. Text mining systems confront a user with extremely large sets of concepts obtained from potentially vast collections of text documents. Consequently, text mining systems must enable a user to move across these concepts in such a way as to always be able to choose either a "big picture" view of the collection in toto or to drill down on specific – and perhaps very sparsely identified – concept relationships.

Visualization tools are often employed by text mining systems to facilitate navigation and exploration of concept patterns. These use various graphical approaches to express complex data relationships. In the past, visualization tools for text mining sometimes generated static maps or graphs that were essentially rigid snapshots of patterns or carefully generated reports displayed on the screen or printed by an attached printer. State-of-the-art text mining systems, however, now increasingly rely on highly interactive graphic representations of search results that permit a user to drag, pull, click, or otherwise directly interact with the graphical representation of concept patterns. Visualization approaches, like that seen in Figure I.1, are discussed more fully in Chapter X.

Several additional types of functionality are commonly supported within the front ends of text mining systems. Because, in many respects, the presentation layer of a text mining system really serves as the front end for the execution of the system's core knowledge discovery algorithms, considerable attention has been focused on providing users with friendlier and more powerful methods for executing these algorithms. Such methods can become powerful and complex enough to necessitate developing dedicated query languages to support the efficient parameterization and execution of specific types of pattern discovery queries. The use of the presentation layer for query execution and simple browsing is discussed in Chapter IX.
At the same time, consistent with an overall emphasis on user empowerment, the designers of many text mining systems have moved away from limiting a user to running only a certain number of fixed, preprogrammed search queries. Instead, these text mining systems are designed to expose much of their search functionality to the user by opening up direct access to their query languages by means of query language interfaces or command-line query interpreters.

[Figure I.1. Example of a visualization tool – mapping concepts (keywords) within the context of categories by means of a "category graph"; the depicted categories include people (industry analysts), brokerage houses, and computer companies. (From Feldman, Kloesgen, Ben-Yehuda, et al. 1997.)]

Furthermore, text mining front ends may offer a user the ability to cluster concepts through a suite of clustering tools (discussed in Chapter V) in ways that make the most cognitive sense for a particular application or task. Text mining systems can also allow a user to create customized profiles for concepts or concept relationships to produce a richer knowledge environment for interactive exploration. Finally, some text mining systems offer users the ability to manipulate, create, or concatenate refinement constraints to assist in producing more manageable and useful result sets for browsing. Like other aspects relating to the creation, shaping, and parameterization of queries, the use of such refinement constraints can be made much more user-friendly by incorporating graphical elements such as pull-downs, radio boxes, or context- or query-sensitive pick lists.

I.1.5 Citations and Notes

Sections I.1–I.1.1
Useful introductions to text mining include Feldman and Dagan (1995), Dixon (1997), Rajman and Besancon (1997b), Feldman (1998), Rajman and Besancon (1998), Hearst (1999a), Tan (1999), and Porter (2002). Feldman (1998) points out some of the distinctions between classic data mining preprocessing operations, such as table joins, and those of text mining systems. Feldman and Hirsh (1996) discusses text mining's indebtedness to information retrieval. Feldman, Fresko, Hirsh et al. (1998) and Nahm and Mooney (2000), among other works, indicate text mining's dependence on information extraction methodologies – especially in terms of preprocessing operations. Hearst (1999) notes text mining's relatedness to some elements of corpus-based computational linguistics.

PubMed, developed by the National Center for Biotechnology Information (NCBI) at the National Library of Medicine (NLM), a division of the U.S. National Institutes of Health (NIH), is the overall name given to the NLM's database access system, which provides access to resources such as the MEDLINE and OLDMEDLINE databases. Full information on PubMed can be found on the NLM's Web site. Hirschman et al. (2002) and Blake and Pratt (2001) both highlight PubMed's attractiveness as a data source for text mining systems.
The estimate that 40,000 new biomedical abstracts are being added to PubMed every month comes from Pustejovsky et al. (2002). Rajman and Besancon (1998) introduced the notion of a prototypical document with respect to text mining document collections. Freitag (1998b) makes the point that a text document can be viewed as a structured object and discusses many of the semantic and syntactical structures that lend structure to a document. Freitag (1998b) and Zaragoza, Massih-Reza, and Gallinari (1999) both indicate that word sequence may also be a structurally meaningful dimension in documents.

Section I.1.2
Blake and Pratt (2001) presents a discussion of document features in a light useful to understanding text mining considerations. The definition of feature dimensionality that we rely on in Chapter II is shaped by the notion as it is described in Pedersen and Bruce (1997). Statistics for the number of word-level features in a collection of 15,000 documents come from Feldman (1998). Yang and Pedersen (1997) points out that tens of thousands of concept-level features may be relevant for a single application domain. Blake and Pratt (2001) and Yang and Pedersen (1997) are generally valuable for understanding some distinctions between different types of document features. The phrase native feature space was borrowed from Yang and Pedersen (1997).

Term-extraction methodologies in text mining are fully treated in Feldman, Fresko, Hirsh et al. (1998). Feldman et al. (2002), Hull (1996), and Brill (1995) are classic works on information extraction useful for understanding lemmatized forms, normalized terms, and so on. Although some computer scientists make distinctions between keywords and concepts (e.g., Blake and Pratt 2001), this book recognizes the two as relatively interchangeable labels for the same feature type and will generally refer to either under the label concept. It should at least be mentioned that there are some more distinct disadvantages to using manually generated concept-level representations. Manually generated concepts, for example, are fixed and labor-intensive to assign (Blake and Pratt 2001). CDM-based methodologies are discussed in Goldberg (1996).

Feldman and Hirsh (1996a) presents one of the first formal discussions regarding the use of background knowledge in text mining. Other relevant works include Kosmynin and Davidson (1996); Zelikovitz and Hirsh (2000); and Hotho, Staab, and Stumme (2003).

Section I.1.3
Feldman, Kloesgen, Ben-Yehuda, et al. (1997) provides an early treatment of knowledge discovery based on co-occurrence relationships between concepts in documents within a document collection. Lent, Agrawal, and Srikant (1997) is the seminal early work for identifying trends in large amounts of textual data. The high-level questions important to trend analysis identified in Section I.1.3 are based on similar questions presented in Montes-y-Gomez et al. (2001b). The terms ephemeral association discovery and deviation detection are used here in the manner introduced in Montes-y-Gomez et al. (2001b).

Section I.1.4
Treatments of browsing germane to text mining and related applications include Chang and Rice (1993); Dagan, Feldman, and Hirsh (1996); Feldman, Kloesgen, Ben-Yehuda, et al. (1997); Smith (2002); and Dzbor, Domingue, and Motta (2004).
Browsing is discussed in Chapter IX, while a detailed discussion of more elaborate visualization approaches for supporting user interactivity in text mining applications can be found in Chapter X.

I.2 GENERAL ARCHITECTURE OF TEXT MINING SYSTEMS

At an abstract level, a text mining system takes in input (raw documents) and generates various types of output (e.g., patterns, maps of connections, trends). Figure I.2 illustrates this basic paradigm.

[Figure I.2. Simple input–output model for text mining: documents in; patterns, connections, and trends out.]

A human-centered view of knowledge discovery, however, yields a slightly more complex input–output paradigm for text mining (see Figure I.3). This paradigm is one in which a user is part of what might be seen as a prolonged interactive loop of querying, browsing, and refining, resulting in answer sets that, in turn, guide the user toward new iterative series of querying, browsing, and refining actions.

[Figure I.3. Iterative loop for user input and output: queries, browsing, and added or subtracted constraints flow in; new result sets and result subsets flow back to the user.]

I.2.1 Functional Architecture

On a functional level, text mining systems follow the general model provided by some classic data mining applications and are thus roughly divisible into four main areas: (a) preprocessing tasks, (b) core mining operations, (c) presentation-layer components and browsing functionality, and (d) refinement techniques.

■Preprocessing Tasks include all those routines, processes, and methods required to prepare data for a text mining system's core knowledge discovery operations. These tasks typically center on data source preprocessing and categorization activities. Preprocessing tasks generally convert the information from each original data source into a canonical format before applying various types of feature extraction methods against these documents to create a new collection of documents fully represented by concepts. Where possible, preprocessing tasks may also either extract or apply rules for creating document date stamps, or do both. Occasionally, preprocessing tasks may even include specially designed methods used in the initial fetching of appropriate "raw" data from disparate original data sources.

■Core Mining Operations are the heart of a text mining system and include pattern discovery, trend analysis, and incremental knowledge discovery algorithms. Among the commonly used patterns for knowledge discovery in textual data are distributions (and proportions), frequent and near frequent concept sets, and associations. Core mining operations can also concern themselves with comparisons between – and the identification of levels of "interestingness" in – some of these patterns. Advanced or domain-oriented text mining systems, or both, can also augment the quality of their various operations by leveraging background knowledge sources. These core mining operations in a text mining system have also been referred to, collectively, as knowledge distillation processes.

■Presentation Layer Components include GUI and pattern browsing functionality as well as access to the query language. Visualization tools and user-facing query editors and optimizers also fall under this architectural category.
Presentation-layer components may include character-based or graphical tools for creating or modifying concept clusters as well as for creating annotated profiles for specific concepts or patterns.

■Refinement Techniques, at their simplest, include methods that filter redundant information and cluster closely related data but may grow, in a given text mining system, to represent a full, comprehensive suite of suppression, ordering, pruning, generalization, and clustering approaches aimed at discovery optimization. These techniques have also been described as postprocessing.

Preprocessing tasks and core mining operations are the two most critical areas for any text mining system and typically describe serial processes within a generalized view of text mining system architecture, as shown in Figure I.4.

[Figure I.4. High-level text mining functional architecture: text documents pass through preprocessing tasks (categorization, feature/term extraction) into a processed document collection (categorized, keyword-labeled, time-stamped), which feeds core mining operations and presentation (pattern discovery, trend analysis, browsing, visualization) for the user.]

At a slightly more granular level of detail, one will often find that the processed document collection is, itself, frequently intermediated with respect to core mining operations by some form of flat, compressed, or hierarchical representation, or both, of its data to better support various core mining operations such as hierarchical tree browsing. This is illustrated in Figure I.5. The schematic in Figure I.5 also factors in the typical positioning of refinement functionality. Further, it adds somewhat more detail with respect to the relative functioning of core data mining algorithms.

[Figure I.5. System architecture for a generic text mining system: document fetching/crawling techniques pull news and e-mail, WWW and FTP resources, and other online resources into preprocessing tasks; the processed document collection, via a compressed or hierarchical representation, feeds text mining discovery algorithms (pattern identification, trend analysis), refinement techniques (suppression, ordering, pruning, generalization, clustering), and browsing functionality (simple filters, query interpreter, search interpreter, visualization tools, GUI, graphing) for the user.]

Many text mining systems – and certainly those operating on highly domain-specific data sources, such as medicine, financial services, high tech, genomics, proteomics, and chemical compounds – can benefit significantly from access to special background or domain-specific data sources. See Figure I.6.

[Figure I.6. System architecture for an advanced or domain-oriented text mining system: the generic architecture of Figure I.5 augmented with parsing routines that draw on external knowledge sources to supply background knowledge to the discovery and refinement components.]
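The serial relationship among the four functional areas can be summarized in a toy end-to-end pipeline. The sketch below is our own schematic with invented placeholder names, not an actual system's API; each stage is reduced to a few lines so the division of labor stays visible:

    # Schematic sketch of the four functional areas as a serial pipeline.
    # All names are invented placeholders, not an actual system's API.
    from collections import Counter
    from itertools import combinations

    def preprocess(raw_docs):
        # Preprocessing: reduce each raw document to a set of concept labels.
        return [set(text.lower().split()) for text in raw_docs]

    def core_mining(docs):
        # Core mining: count concept co-occurrences across the collection.
        pairs = Counter()
        for concepts in docs:
            pairs.update(combinations(sorted(concepts), 2))
        return pairs

    def refine(patterns, min_support=2):
        # Refinement: suppress patterns below a support threshold.
        return {p: n for p, n in patterns.items() if n >= min_support}

    def present(patterns):
        # Presentation layer: render the surviving patterns for browsing.
        for (a, b), n in sorted(patterns.items(), key=lambda kv: -kv[1]):
            print(f"{a} & {b}: {n} documents")

    raw = ["iran reagan scandal", "nicaragua reagan", "iran reagan"]
    present(refine(core_mining(preprocess(raw))))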
Background knowledge is often used for providing constraints to, or auxiliary information about, concepts found in the text mining collection's document collection. The background knowledge for a text mining system can be created in various ways. One common way is to run parsing routines against external knowledge sources, such as formal ontologies, after which unary or binary predicates for the concept-labeled documents in the text mining system's document collection are identified. These unary and binary predicates, which describe properties of the entities represented by each concept deriving from the expert or "gold standard" information sources, are in turn put to use by a text mining system's query engine. In addition, such constraints can be used in a text mining system's front end to allow a user to either (a) create initial queries based around these constraints or (b) refine queries over time by adding, subtracting, or concatenating constraints.

Commonly, background knowledge is preserved within a text mining system's architecture in a persistent store accessible by various elements of the system. This type of persistent store is sometimes loosely referred to as a system's knowledge base. The typical position of a knowledge base within the system architecture of a text mining system can be seen in Figure I.7.

[Figure I.7. System architecture for an advanced text mining system with background knowledge base.]

These generalized architectures are meant to be more descriptive than prescriptive in that they represent some of the most common frameworks found in the present generation of text mining systems. Good sense, however, should be the guide for prospective system architects of text mining applications, and thus significant variation on the general themes that have been identified is possible. System architects and developers could include more of the filters typically found in a text mining system's browser or even within subroutines contained among the system's store of refinement techniques as "preset" options within search algorithms included in its main discovery algorithms. Likewise, it is conceivable that a particular text mining system's refinement techniques or main discovery algorithms might later find a very fruitful use for background knowledge.

I.2.2 Citations and Notes

Section I.2
The view of human-centered knowledge discovery introduced in Brachman and Anand (1996) and to some degree echoed in Grinstein (1996) influences much of the discussion of text mining systems in Chapter II and indeed throughout this book.

Section I.2.1
The architectural elements of the systems elaborated on here reflect a composite of operations developed in several widely described real-world text mining applications, most especially the KDT (Feldman and Dagan 1995), FACT (Feldman and Hirsh 1996a; Feldman and Hirsh 1996b; Feldman and Hirsh 1997), and Document Explorer (Feldman, Kloesgen, Ben Yehuda, et al. 1997; Feldman, Kloesgen, and Zilberstein 1997a) systems.
Besides these text mining applications, other systems at least referentially contributing in some way to this composite include the TEXTRISE system (Nahm and Mooney 2000), the SYNDICATE system (Hahn and Schnattinger 1997), the Explora system (Kloesgen 1995b), and the LINDI project (Hearst 1999). In particular, Feldman, Kloesgen, and Zilberstein (1997a) includes a pertinent discussion of the architecture of the Document Explorer system. Tan (1999) also proposes a generalized architecture for text mining systems, using the term "knowledge distillation processes" in roughly the same way as this section refers to "core mining operations." The term "postprocessing" – as a general label for what this book refers to as refinement techniques – comes from Hotho et al. (2002).

II Core Text Mining Operations

Core mining operations in text mining systems center on the algorithms that underlie the creation of queries for discovering patterns in document collections. This chapter describes most of the more common – and a few useful but less common – forms of these algorithms. Pattern-discovery algorithms are discussed primarily from a high-level definitional perspective. In addition, we examine the incorporation of background knowledge into text mining query operations. Finally, we briefly treat the topic of text mining query languages.

II.1 CORE TEXT MINING OPERATIONS

Core text mining operations consist of various mechanisms for discovering patterns of concept occurrence within a given document collection or subset of a document collection. The three most common types of patterns encountered in text mining are distributions (and proportions), frequent and near frequent sets, and associations. Typically, when they offer the capability of discovering more than one type of pattern, text mining systems afford users the ability to toggle between displays of the different types of patterns for a given concept or set of concepts. This allows the richest possible exploratory access to the underlying document collection data through a browser.

II.1.1 Distributions

This section defines and discusses some of text mining's most commonly used distributions. We illustrate these in the context of a hypothetical text mining system that has a document collection W composed of news wire stories about world affairs, all of which have been preprocessed with concept labels.

Whether as an initial step, to create a baseline, or to create more meaningful subdivisions of a single document collection for comparison purposes, text mining systems generally need to refer to some subcollection of a complete document collection. This activity is commonly referred to as concept selection. Given some collection of documents D, a text mining system will have a requirement to refer to some subcollection of D that is labeled by one or more given concepts.

Definition II.1. Concept Selection: If D is a collection of documents and K is a set of concepts, D/K is the subset of documents in D labeled with all of the concepts in K. When it is clear from the context, given a single concept k, rather than writing D/{k} we use the notation D/k.
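To make the D/K notation concrete, here is a minimal sketch in Python; the representation (documents as plain sets of concept labels) and all names are illustrative assumptions rather than part of any particular system.

def select(D, K):
    """Return D/K: the documents in D labeled with every concept in K."""
    K = set(K)
    return [doc for doc in D if K <= doc]

# A toy collection of concept-labeled documents.
W = [{"iran", "nicaragua", "reagan"}, {"reagan", "usa"}, {"iran", "usa"}]

print(select(W, {"reagan"}))                       # W/reagan: the first two documents
print(select(W, {"iran", "nicaragua", "reagan"}))  # W/{iran, nicaragua, reagan}: the first only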
For example, the collection W/{iran, nicaragua, reagan} contains the subset of the World Affairs collection – namely, those documents that are labeled with the concepts iran, nicaragua, and reagan; W/reagan contains the subset of documents that are labeled (at least) with reagan; and W/G8 contains those documents that are labeled with any terminal node under G8 (i.e., labeled with any G8 country). G8 is treated as a concept here when concept selection is being performed (rather than being viewed as the set of concepts under it, in which case all of its descendants would have been required to be present).

Text mining systems often need to identify or examine the proportion of a set of documents labeled with a particular concept. This analytic is commonly referred to as concept proportion.

Definition II.2. Concept Proportion: If D is a collection of documents and K is a set of concepts, f(D, K) is the fraction of documents in D labeled with all of the concepts in K, that is,

f(D, K) = |D/K| / |D|.

Given one concept k, rather than writing f(D, {k}), we use the notation f(D, k). When D is clear from context, we drop it and write f(k).

Thus, for example, f(W, {iran, nicaragua, reagan}) is the fraction of documents in the World Affairs collection labeled with iran, nicaragua, and reagan; f(reagan) is the proportion of the collection labeled with the concept reagan; and f(G8) is the proportion labeled with any G8 country.

By employing definitions of selection and proportion, text mining systems can already begin identifying some useful quantities for analyzing a set of documents. For example, a text mining system might want to identify the proportion of those documents labeled with K2 that are also labeled by K1, which could be designated by the expression f(D/K2, K1). This type of proportion occurs regularly enough that it has received an explicit name and notation: conditional concept proportion.

Definition II.3. Conditional Concept Proportion: If D is a collection of documents and K1 and K2 are sets of concepts, f(D, K1 | K2) is the proportion of all those documents in D labeled with K2 that are also labeled with K1, that is, f(D, K1 | K2) = f(D/K2, K1). When D is clear from context, we will write this as f(K1 | K2).

Applying this definition, we find that f(reagan | iran) would represent the proportion of all documents labeled by the concept iran that are also labeled by the concept reagan.

Commonly, a text mining system needs to analyze the distribution of concepts that are descendants of a particular node in a concept hierarchy. For example, a text mining system might need to allow the analysis of the distribution of concepts denoting finance topics – that is, descendants of the finance topics node in an example concept hierarchy. To accomplish this, a text mining system could use the expression PK(x) to refer to such distributions – it will assign to any concept x in K a value between 0 and 1 – where the values are not required to add up to 1.

This type of proportion can be referred to as a concept distribution. In the following sections we present several specific examples of such PK(x) distributions. One particularly important concept distribution for knowledge discovery operations is the concept proportion distribution, which gives the proportion of documents in some collection that are labeled with each of a number of selected concepts.
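Before turning to the formal definition of the concept proportion distribution, here is a minimal sketch (Python, continuing the toy set-of-labels representation from the earlier sketch; all names are illustrative assumptions) of the simpler proportion measures just defined.

def proportion(D, K):
    """f(D, K): fraction of documents in D labeled with every concept in K."""
    return sum(1 for doc in D if set(K) <= doc) / len(D) if D else 0.0

def conditional_proportion(D, K1, K2):
    """f(D, K1 | K2) = f(D/K2, K1)."""
    return proportion([doc for doc in D if set(K2) <= doc], K1)

W = [{"iran", "nicaragua", "reagan"}, {"reagan", "usa"}, {"iran", "usa"}]
print(proportion(W, {"reagan"}))                        # f(reagan) = 2/3
print(conditional_proportion(W, {"reagan"}, {"iran"}))  # f(reagan | iran) = 1/2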
Definition II.4. Concept Proportion Distribution: If D is a collection of documents and K is a set of concepts, FK(D, x) is the proportion of documents in D labeled with x, for any x in K. When D is clear from context, we will write this as FK(x).

Note the distinction between PK(x) and FK(x). PK(x) refers generically to any function that is a concept distribution. FK(x) is a specific concept distribution defined by a particular concept-labeled set of documents. Thus, for example, Ftopics(W, x) would represent the proportions of documents in W labeled with keywords under the topics node in the concept hierarchy. In this expression, topics is used as shorthand for referring to a set of concepts – namely, all those that occur under the topics node – instead of explicitly enumerating them all. Also, note that F{k}(D, k) = f(D, k) – that is, FK subsumes the earlier defined f when it is applied to a single concept. Unlike f, however, FK is restricted to refer only to the proportion of occurrences of individual concepts (those occurring in the set K).¹ Thus f and F are not comparable.

Mathematically, F is not a true frequency distribution, for each document may be labeled by multiple items in the set K. Thus, for example, a given document may be labeled by two (or more) G8 countries because occurrences of concepts are not disjoint events. Therefore, the sum of values in FG8 may be greater than one. In the worst case, if all concepts in K label all documents, the sum of the values in a distribution F can be as large as |K|. Furthermore, because some documents may contain none of the concepts in a given K, the sum of frequencies in F might also be smaller than one – in the worst case, zero. Nonetheless, the term "distribution" is used for F because many of the connotations this term suggests still hold true.

Just as was the case for concept proportions, text mining systems can also leverage conditional concept-proportion distributions, which are probably one of the most used concept distributions in text mining systems.

Definition II.5. Conditional Concept Proportion Distribution: If D is a collection of documents and K and K′ are sets of concepts, FK(D, x | K′) is the proportion of those documents in D labeled with all the concepts in K′ that are also labeled with concept x (with x in K), that is, FK(D, x | K′) = FK(D/K′, x). We often write this as FK(x | K′) when D is clear from context.

Thus, for example, Ftopics(x | Argentina) would assign to any concept x under topics in the hierarchy the proportion of documents labeled by x within the set of all documents labeled by the concept Argentina, and Ftopics(x | {UK, USA}) is the similar distribution for those documents labeled with both the UK and USA concepts.

¹ It is also quite simple to define a similar notion for sets of concepts, for example, by computing the proportions for each subset of a set K (Feldman, Dagan, and Hirsh, 1998).

One of the baseline distributions text mining systems use to compare distributions is the average distribution over a set of sibling nodes in the hierarchy. For example, when looking at the proportions of loan within South American countries – such as f(W, loan | Argentina), f(W, loan | Brazil), and f(W, loan | Colombia) – an end user may be interested in the average of all proportions of this form for all the South American countries – that is, the average of all proportions of the form f(W, loan | k), where k ranges over all South American countries.
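Before defining these averages formally, a minimal sketch of the distribution operators FK and FK(· | K′) (Python, building on the proportion helper sketched above; names remain illustrative assumptions):

def proportion_distribution(D, K):
    """F_K(D, x): the proportion of documents labeled with x, for each x in K."""
    return {x: proportion(D, {x}) for x in K}

def conditional_distribution(D, K, K_prime):
    """F_K(D, x | K') = F_K(D/K', x): the same distribution over D/K'."""
    return proportion_distribution([doc for doc in D if set(K_prime) <= doc], K)

Note that the returned values need not sum to one, mirroring the caveat above that F is not a true frequency distribution.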
Definition II.6. Average Concept Proportion: Given a collection of documents D, a concept k, and an internal node in the hierarchy n, an average concept proportion, denoted by a(D, k | n), is the average value of f(D, k | k′), where k′ ranges over all immediate children of n – that is,

a(D, k | n) = Avg{k′ is a child of n} { f(D, k | k′) }.

When D is clear from context, this will be written a(k | n).

For example, a(loan | South America) is the average concept proportion of f(loan | k′) as k′ varies over each child of the node South America in the concept hierarchy; that is, it is the average conditional concept proportion for loan within South American countries. This quantity does not average the values weighted by the number of documents labeled by each child of n. Instead, it equally represents each descendant of n and should be viewed as a summary of what a typical concept proportion is for a child of n.

An end user may be interested in the distribution of averages for each economic topic within South American countries. This is just another concept distribution, referred to as an average concept distribution.

Definition II.7. Average Concept Distribution: Given a collection of documents D and two internal nodes in the hierarchy n and n′, an average concept distribution, denoted by An(D, x | n′), is the distribution that, for any x that is a child of n, averages x's proportions over all children of n′ – that is,

An(D, x | n′) = Avg{k′ is a child of n′} { Fn(D, x | k′) }.

When clear from context, this will be written An(x | n′).

For example, Atopics(x | South America), which can be read as "the average distribution of topics within South American countries," gives the average proportion within all South American countries for any topic x.

A very basic operation for text mining systems using concept distributions is the display of conditional concept-proportion distributions. For example, a user may be interested in seeing the proportion of documents labeled with each child of topics for all those documents labeled by the concept Argentina – that is, the proportion of Argentina documents that are labeled with each topic keyword. This distribution would be designated by Ftopics(W, x | Argentina), and a correlating graph could be generated, for instance, as a bar chart, which might display the fact that 12 articles among all articles of Argentina are annotated with sorghum, 20 with corn, 32 with grain, and so on, providing a summary of the areas of economic activity of Argentina as reflected in the text collection.

Conditional concept-proportion distributions can also be conditioned on sets of concepts. In some sense, this type of operation can be viewed as a more refined form of traditional concept-based retrieval. For example, rather than simply requesting all documents labeled by Argentina, or by both UK and USA, the user can approach the documents at a higher level – requesting documents labeled by Argentina, for example, and first seeing what proportions are labeled by concepts from some secondary set of concepts of interest – with the user then able to access the documents through this more fine-grained grouping of Argentina-labeled documents.
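The averages over sibling nodes can be sketched in the same toy setting (Python; the hierarchy fragment and all names are hypothetical, and the code builds on the conditional_proportion helper above):

# A hypothetical fragment of a concept hierarchy: node -> immediate children.
hierarchy = {
    "South America": ["Argentina", "Brazil", "Colombia"],
    "topics": ["loan", "grain", "crude oil"],
}

def average_concept_proportion(D, k, n):
    """a(D, k | n): average of f(D, k | k') over the immediate children k' of n."""
    children = hierarchy[n]
    return sum(conditional_proportion(D, {k}, {c}) for c in children) / len(children)

def average_concept_distribution(D, n, n_prime):
    """A_n(D, x | n'): averages each child x of n over the children of n'."""
    return {x: average_concept_proportion(D, x, n_prime) for x in hierarchy[n]}

# e.g., a(loan | South America):
#   average_concept_proportion(W, "loan", "South America")
# and the full average distribution of topics within South American countries:
#   average_concept_distribution(W, "topics", "South America")

Each child of n′ contributes equally, matching the remark above that the average is deliberately unweighted by subcollection size.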
Comparing with Average Distributions
Consider a conditional proportion of the form FK(D, x | k), the distribution over K of all documents labeled with some concept k (not necessarily in K). It is natural to expect that this distribution would be similar to other distributions of this form over conditioning events k′ that are siblings of k. When they differ substantially, it is a sign that the documents labeled with the conditioning concept k may be of interest. To facilitate this kind of comparison of concept-labeled documents with the average of those labeled with the concept and its siblings, a user can specify two internal nodes of the hierarchy and compare individual distributions of concepts under one of the nodes conditioned on the concept set under the other node – that is, compute D(Fn(x | k) || An(x | n′)) for each k that is a child of n′.

In addition to their value in finding possible interesting concept labelings, comparisons of this type also provide a hierarchical browsing mechanism for concept co-occurrence distributions. For example, an analyst interested in studying the topic distribution in articles dealing with G8 countries may first browse the average class distribution for G8. This might reveal the major topics that are generally common for G8 countries. Then, an additional search could be used to reveal the major characteristics specific to each country.

Comparing Specific Distributions
The preceding mechanism for comparing distributions with an average distribution is also useful for comparing conditional distributions of two specific nodes in the hierarchy. For example, one could measure the distance from the average topic distribution of Arab League countries to the average topic distribution of G8 countries. An answer set could be returned from a query into a table with topics sorted in decreasing order of their contribution to the distance (second column) – namely, d(Atopics(x | Arab League) || Atopics(x | G8)). Additional columns could show, respectively, the percentage of the topic in the average topic distribution of the Arab League countries (Atopics(x | Arab League)) and in the average topic distribution of the G8 countries (Atopics(x | G8)). One could also show the total number of articles in which the topic appears with any Arab League country and any G8 country. This would reveal the topics with which Arab League countries are associated much more than G8 countries – such as grain, wheat, and crude oil. Finally, one could show the comparison in the opposite direction, revealing the topics with which G8 countries are highly associated relative to the Arab League.

II.1.2 Frequent and Near Frequent Sets

Frequent Concept Sets
In addition to proportions and distributions, another basic type of pattern that can be derived from a document collection is a frequent concept set. This is defined as a set of concepts represented in the document collection with co-occurrences at or above a minimal support level (given as a threshold parameter s; i.e., all the concepts of the frequent concept set appear together in at least s documents). Although originally defined as an intermediate step in finding association rules (see Section II.1.3), frequent concept sets contain a great deal of information of use in text mining.

The search for frequent sets has been well treated in the data mining literature, stemming from research centered on investigating market basket–type associations first published by Agrawal et al. in 1993. Essentially, a document can be viewed as a market basket of named entities. Discovery methods for frequent concept sets in text mining build on the Apriori algorithm of Agrawal et al.
(1993), used in data mining for market basket association problems. With respect to frequent sets in natural language applications, support is the number (or percent) of documents containing the given rule – that is, the co-occurrence frequency. Confidence is the percentage of the time that the rule is true.

L1 = {large 1-itemsets}
for (k = 2; Lk−1 ≠ Ø; k++) do begin
    Ck = apriori-gen(Lk−1)          // new candidates
    forall transactions t ∈ D do begin
        Ct = subset(Ck, t)          // candidates contained in t
        forall candidates c ∈ Ct do
            c.count++
    end
    Lk = {c ∈ Ck | c.count ≥ minsupport}
end
Answer = ∪k Lk
Algorithm II.1: The Apriori Algorithm (Agrawal and Srikant 1994)²

A frequent set in text mining can be seen directly as a query given by the conjunction of concepts of the frequent set. Frequent sets can be partially ordered by their generality and hold the simple but useful pruning property that each subset of a frequent set is a frequent set. The discovery of frequent sets can be useful both as a type of search for patterns in its own right and as a preparatory step in the discovery of associations.

Discovering Frequent Concept Sets
As mentioned in the previous section, frequent sets are generated in relation to some support level. Because support (i.e., the frequency of co-occurrence) has, by convention, often been expressed as the variable σ, frequent sets are sometimes also referred to as σ-covers, or σ-cover sets. A simple algorithm for generating frequent sets relies on incremental building of the group of frequent sets from singleton σ-covers, to which additional elements that continue to satisfy the support constraint are progressively added. Algorithm II.2 is a typical algorithm for discovering frequent concept sets.

² In data mining, the expression item is commonly used in a way that is roughly analogous to the expression feature in text mining. Therefore, the expression item set can be seen here, at least, as analogous to the expression concept set.

L1 = {{A} | A ∈ R and |[A]| ≥ σ}
i = 1
while Li ≠ Ø do
    Li+1 = {S1 ∪ S2 | S1, S2 ∈ Li, |S1 ∪ S2| = i + 1,
            all i-sized subsets of S1 ∪ S2 are in Li}
    i = i + 1
end do
return {X | X ∈ ∪i Li and |[X]| ≥ σ}
Algorithm II.2: Algorithm for Frequent Set Generation

Near Frequent Concept Sets
Near frequent concept sets establish an undirected relation between two frequent sets of concepts. This relation can be quantified by measuring the degree of overlapping, for example, on the basis of the number of documents that include all the concepts of the two concept sets. This measure can be regarded as a distance function between the concept sets. Several distance functions can be introduced (e.g., based on the cosine of document vectors, Tanimoto distance, etc.). Directed relations between concept sets can also be identified. These are considered types of associations (see Section II.1.3).
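To make the preceding algorithms concrete, here is a compact, runnable rendering of Apriori-style frequent concept set discovery in Python – a sketch over the toy set-of-labels representation used earlier, not a reference implementation of any particular system:

from itertools import combinations

def frequent_concept_sets(D, sigma):
    """Return all concept sets co-occurring in at least sigma documents of D."""
    def support(X):
        return sum(1 for doc in D if X <= doc)

    items = {c for doc in D for c in doc}
    level = [frozenset([c]) for c in items if support(frozenset([c])) >= sigma]
    frequent = set(level)
    while level:
        # Join step: unite k-sets whose union has size k + 1.
        candidates = {a | b for a in level for b in level if len(a | b) == len(a) + 1}
        # Prune step: every k-sized subset of a candidate must itself be frequent.
        level = [c for c in candidates
                 if all(frozenset(s) in frequent for s in combinations(c, len(c) - 1))
                 and support(c) >= sigma]
        frequent.update(level)
    return frequent

docs = [{"x", "y", "z", "w"}, {"x", "w"}, {"x", "y", "p"}, {"x", "y", "t"}]
print(sorted(map(sorted, frequent_concept_sets(docs, 2))))
# [['w'], ['w', 'x'], ['x'], ['x', 'y'], ['y']]

This sketch recomputes support with a fresh scan per candidate for brevity; Algorithm II.1 amortizes that cost with per-candidate counters maintained during a single pass per level.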
II.1.3 Associations

A formal description of association rules was first presented in the same research on "market basket" problems that led to the identification of frequent sets in data mining. Subsequently, associations have been widely discussed in the literature on knowledge discovery targeted at both structured and unstructured data. In text mining, associations specifically refer to the directed relations between concepts or sets of concepts.

An association rule is generally an expression of the form A ⇒ B, where A and B are sets of features. An association rule A ⇒ B indicates that transactions that involve A tend also to involve B. For example, from the original market-basket problem, an association rule might be 25 percent of the transactions that contain pretzels also contain soda; 8 percent of all transactions contain both items. In this example, 25 percent refers to the confidence level of the association rule, and 8 percent refers to the rule's level of support.

With respect to concept sets, an association rule A ⇒ B, relating two frequent concept sets A and B, can be quantified by these two basic measures of support and confidence. Confidence is the percentage of documents that include all the concepts in B within the subset of those documents that include all the concepts in A. Support is the percentage (or number) of documents that include all the concepts in A and B. More precisely, we can describe association rules as follows:
■ Let r = {t1, . . . , tn} be a collection of documents, each labeled with some subset of concepts from the m-concept set R = {I1, I2, . . . , Im}.
■ Given a concept A and a document t, we write t(A) = 1 if A is one of the concepts labeling t, and t(A) = 0 otherwise.
■ If W is a subset of the concepts in R, t(W) = 1 represents the case that t(A) = 1 for every concept A ∈ W.
■ Given a set X of concepts from R, define [X] = {i | ti(X) = 1}; [X] is the set of all documents ti that are labeled (at least) with all the concepts in X.
■ Given some number σ (the support threshold), X is called a σ-covering if |[X]| ≥ σ.

W ⇒ B is an association rule over r if W ⊆ R and B ⊆ R\W. We refer to W as the left-hand side (LHS) of the association and B as the right-hand side (RHS). Finally, we say that r satisfies W ⇒ B with respect to 0 < γ ≤ 1 (the confidence threshold) and σ (the support threshold) if W ∪ B is a σ-covering (i.e., |[W ∪ B]| ≥ σ) and |[W ∪ B]| / |[W]| ≥ γ. Intuitively, this means that, of all documents labeled with the concepts in W, at least a proportion γ of them are also labeled with the concepts in B; further, this rule is based on at least σ documents labeled with all the concepts in both W and B.

For example, suppose a document collection has documents labeled with concepts in the following tuples: {x, y, z, w}, {x, w}, {x, y, p}, {x, y, t}. If γ = 0.8 and σ = 0.5 (here σ is read as a fraction of the collection – at least two of the four documents), and {x}, {y}, {w}, {x, w}, and {x, y} are coverings, then {y} ⇒ {x} and {w} ⇒ {x} are the only associations.

Discovering Association Rules
The discovery of association rules is the problem of finding all the association rules with a confidence and support greater than the user-identified values minconf (i.e., γ, or the minimum confidence level) and minsup (i.e., σ, or the minimum support level). The basic approach to discovering associations is a generally straightforward two-step process:
■ Find all frequent concept sets X (i.e., all combinations of concepts with a support greater than minsup);
■ Test whether X \ B ⇒ B holds with the required confidence.
The first step – namely, the generation of frequent concept sets (see Algorithm II.2) – has usually been found to be by far the most computationally expensive operation.
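The rule test itself reduces to a few lines (Python; toy documents from the worked example above, with σ expressed as an absolute document count – an assumption made here for readability):

def rule_holds(D, W, B, gamma, sigma):
    """Does D satisfy W => B at confidence gamma and (absolute) support sigma?"""
    n_W = sum(1 for doc in D if W <= doc)
    n_WB = sum(1 for doc in D if (W | B) <= doc)
    return n_WB >= sigma and n_W > 0 and n_WB / n_W >= gamma

docs = [{"x", "y", "z", "w"}, {"x", "w"}, {"x", "y", "p"}, {"x", "y", "t"}]
print(rule_holds(docs, {"y"}, {"x"}, gamma=0.8, sigma=2))  # True
print(rule_holds(docs, {"x"}, {"y"}, gamma=0.8, sigma=2))  # False: 3/4 < 0.8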
A typical simple algorithm for the second step – generating associations (after the generation of maximal frequent concept sets has been completed) – can be found below in Algorithm II.3.

foreach maximal frequent set X do
    generate all the rules X \ {b} ⇒ {b}, where b ∈ X,
    such that |[X]| / |[X \ {b}]| ≥ γ
endfch
Algorithm II.3: Simple Algorithm for Generating Associations (Rajman and Besancon 1998)

Thus, essentially, if {w, x} and {w, x, y, z} are frequent concept sets, then the association rule {w, x} ⇒ {y, z} can be computed by the following ratio:

c = support({w, x, y, z}) / support({w, x}).

Again, however, in this case the association rule will hold only if c ≥ γ.

Given these steps, if there are m concepts in a document collection, then, in a single pass, all possible 2^m subsets for that document collection can be checked. Of course, in extremely large, concept-rich document collections, this can still be a nontrivial computational task. Moreover, because of the implications of generating an overabundance of associations, additional procedures – such as structural or statistical pruning, redundancy elimination, and so on – are sometimes used to supplement the main association rule extraction procedure in order to limit the number of generated associations.

Maximal Associations
Association rules are very useful in helping to describe general associations between concepts. Maximal associations represent a more specialized type of relationship between concepts in which associations are identified in terms of their relevance to one concept and their lack of relevance to another. These associations help create solutions in the particular problem space that exists within text document collections, where closely related items frequently appear together. Conventional association rules fail to provide a good means for the specific discovery of associations pertaining to concepts that most often do not appear alone (but rather together with closely related concepts) because associations relevant only to these concepts tend to have low confidence. Maximal association rules provide a mechanism for discovering these types of specialized relations.

For example, in a document collection, the concept "Saddam Hussein" may most often appear in association with "Iraq," and "Microsoft" most often with "Windows." Because of the existence of these most common relationships, associations especially relevant to the first concept in the association, but not the other, will tend to have low confidence. For instance, an association between "Iraq" and the "Arab League" would have low confidence because of the many instances in which "Iraq" appears with "Saddam Hussein" (and not "Arab League"). Likewise, an association between "Microsoft" and "Redmond" would potentially be left unidentified because of the many more instances in which "Microsoft" appears with "Windows." Maximal associations identify associations relevant to one concept but not the other – that is, associations relating to "Iraq" or "Microsoft" alone.

Maximal Association Rules: Defining M-Support and M-Confidence
Fundamentally, a maximal association rule X ⇒max Y states that, whenever X is the only concept set of its type in a transaction (i.e., whenever X appears alone), then Y also appears with some confidence. To understand the notion of a maximal association rule, it is important to define the meaning of alone in this context. We can do so with respect to a set of concept categories:

Definition II.8. Alone with Respect to Maximal Associations: For a transaction t, a category g, and a concept set X ⊆ g, we say that X is alone in t if t ∩ g = X.
That is, X is alone in t if X is the largest subset of g that is in t. In such a case, one says that X is maximal in t and that t M-supports X. For a document collection D, the M-support of X in D, denoted s^max_D(X), is the number of transactions t ∈ D that M-support X.

A maximal association rule, or M-association, is a rule of the form X ⇒max Y, where X and Y are subsets of distinct categories, which can be identified as g(X) and g(Y), respectively. The M-support for the maximal association X ⇒max Y, denoted s^max_D(X ⇒max Y), is defined as

s^max_D(X ⇒max Y) = |{t : t M-supports X and t supports Y}|.

That is, the M-support of X ⇒max Y is equal to the number of transactions in D that M-support X and also support Y in the conventional sense, which suggests that, whenever a transaction M-supports X, then Y also appears in the transaction with some probability.

In measuring this probability, we are generally interested only in those transactions in which some element of g(Y) (i.e., the category of Y) appears. Thus, we define confidence in the following manner. If D(X, g(Y)) is the subset of the document collection D consisting of all the transactions that M-support X and contain at least one element of g(Y), then the M-confidence of the rule X ⇒max Y, denoted c^max_D(X ⇒max Y), is

c^max_D(X ⇒max Y) = s^max_D(X ⇒max Y) / |D(X, g(Y))|.

A text mining system can search for associations in which the M-support is higher than some user-specified minimum M-support, denoted s, and the M-confidence is higher than some user-specified minimum M-confidence, denoted c. A set X that has M-support of at least s is said to be M-frequent.

M-Factor
Any maximal association rule is also a conventional association, with perhaps different levels of support and confidence. The M-factor of the rule X ⇒max Y is the ratio between the M-confidence of the maximal association X ⇒max Y and the confidence of the corresponding conventional association X ⇒ Y. Specifically, if D′ is the subset of the transactions that contain at least one concept of g(Y), then the M-factor of the association X ⇒max Y is

M-factor(X ⇒max Y) = c^max_D(X ⇒max Y) / c_D′(X ⇒ Y).

Here, the denominator is the confidence for the rule X ⇒ Y with respect to D′. This is because, given that the M-confidence is defined with respect to D′, the comparison to conventional associations must also be made with respect to the same set. From a practical perspective, one generally seeks M-associations with a high M-factor. Such M-associations tend to represent more interesting rules.
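A minimal sketch of M-support and M-confidence over the set-of-labels representation (Python; the two categories and all names are hypothetical illustrations of the definitions above):

def m_supports(t, X, g):
    """t M-supports X when X is exactly the portion of category g present in t."""
    return t & g == X

def m_confidence(D, X, Y, g_X, g_Y):
    """c^max(X => Y): among transactions that M-support X and contain some
    element of g(Y), the fraction that also support Y conventionally."""
    m_support = sum(1 for t in D if m_supports(t, X, g_X) and Y <= t)
    denominator = sum(1 for t in D if m_supports(t, X, g_X) and t & g_Y)
    return m_support / denominator if denominator else 0.0

# Hypothetical categories: countries and organizations.
countries = {"Iraq", "Kuwait"}
orgs = {"Arab League", "UN"}
docs = [{"Iraq", "Arab League"},
        {"Iraq", "Saddam Hussein", "UN"},
        {"Iraq", "Kuwait", "UN"}]
print(m_confidence(docs, {"Iraq"}, {"Arab League"}, countries, orgs))  # 0.5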
II.1.4 Isolating Interesting Patterns

The notion of interestingness with respect to knowledge discovery in textual data has been viewed from various subjective and contextual perspectives. The most common method of defining interestingness in relation to patterns of distributions, frequent sets, and associations has been to enable a user to input expectations into a system and then to find some way of measuring or ranking patterns with respect to how far they differ from the user's expectations. Text mining systems can quantify the potential degree of "interest" in some piece of information by comparing it to a given "expected" model. This model then serves as a baseline for the investigated distribution. For example, a user may want to compare the data regarding Microsoft with an averaged model constructed for a group of computer software vendors. Alternatively, a user may want to compare the data relating to Microsoft in the last year with a model constructed from the data regarding Microsoft in previous years.

Interestingness with Respect to Distributions and Proportions
Because text mining systems rely on concept proportions and distributions to describe the data, one requires measures for quantifying the distance between an investigated distribution and another distribution that serves as a baseline model (Feldman, Dagan, and Hirsh 1998). So long as the distributions are discrete, one can simply use sum-of-squares to measure the distance between two models:

D(p′ || p) = Σx (p′(x) − p(x))²,

where the target distribution is designated by p and the approximating distribution by p′, and the x in the summation is taken over all objects in the domain. This measure is always nonnegative and is 0 if and only if p′ = p. Given this measure, one can use it as a heuristic device. With respect to distribution-based patterns, it can serve as a heuristic for judging concept-distribution similarities. The measure is referred to as concept distribution distance.

Definition II.9. Concept Distribution Distance: Given two concept distributions P′K(x) and PK(x), the distance D(P′K || PK) between them is defined by

D(P′K(x) || PK(x)) = Σ{x∈K} (P′K(x) − PK(x))².

Text mining systems are also sometimes interested in the value of the difference between two distributions at a particular point. This measure is called concept proportion distance.

Definition II.10. Concept Proportion Distance: Given two concept distributions P′K(x) and PK(x) and a concept k in K, the distance d(P′K(k) || PK(k)) between them is defined by

d(P′K(k) || PK(k)) = P′K(k) − PK(k).

Thus, another way to state D(P′K || PK) would be Σ{x∈K} [d(P′K(x) || PK(x))]².

As an example, the distance between the distribution of topics within Argentina and the distribution of topics within Brazil would be written as D(Ftopics(x | Argentina) || Ftopics(x | Brazil)), and the distance between the distribution of topics within Argentina and the average distribution of topics within South America would be written as D(Ftopics(x | Argentina) || Atopics(x | South America)).
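The sum-of-squares distance translates directly to code (Python; a sketch over the dictionary-valued distributions produced by the earlier sketches):

def distribution_distance(p_new, p_base):
    """D(p' || p): sum of squared pointwise differences over a shared domain."""
    domain = set(p_new) | set(p_base)
    return sum((p_new.get(x, 0.0) - p_base.get(x, 0.0)) ** 2 for x in domain)

# e.g., distribution_distance(conditional_distribution(W, topics, {"Argentina"}),
#                             conditional_distribution(W, topics, {"Brazil"}))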
II.1.5 Analyzing Document Collections over Time

Early text mining systems tended to view a document collection as a single, monolithic entity – a unitary corpus consisting of one coherent and largely static set of textual documents. Many text mining applications, however, benefit from viewing the document collection not as a monolithic corpus but in terms of subsets or divisions defined by the date and time stamps of documents in the collection. This type of view can be used to allow a user to analyze similarities and differences between concept relationships across the various subdivisions of the corpus in a way that better accounts for the change of concept relationships over time.

Trend analysis, in text mining, is the term generally used to describe the analysis of concept distribution behavior across multiple document subsets over time. Other time-based analytics include the discovery of ephemeral associations, which focuses on the influence or interaction of the most frequent or "peak" concepts in a period on other concepts, and deviation, which concentrates on irregularities such as documents whose concepts differ from those of more typical documents in a document collection (or subcollection) over time. In addition, text mining systems can enable users to explore the evolution of concept relationships through temporal context graphs and context-oriented trend graphs.

Although trend analysis and related time-based analytics attempt to better account for the evolving nature of concept relationships in a document collection, text mining systems have also developed practical approaches to the real-world challenges inherent in supporting truly dynamic document collections that add, modify, or delete documents over time. Such algorithms have been termed incremental algorithms because they tend to be aimed at more efficient incremental updating of the search information that has already been mined from a document collection to account for new data introduced by documents added to the collection over time.

Both trend analysis and incremental algorithms add a certain dynamism to text mining systems, allowing these systems to interact with more dynamic document collections. This can be critical for developing useful text mining applications targeted at handling time series–type financial reports, topical news feeds, text-based market data, time-sensitive voter or consumer sentiment commentary, and so on.

Trend Analysis
The origin of the problem of discovering trends in textual data can be traced to research on methods for detecting and presenting trends in word phrases. These methods center on a two-phase process in which, in the first phase, phrases are created as frequent sequences of words using the sequential patterns mining algorithm first mooted for mining structured databases and, in the second phase, a user can query the system to obtain all phrases whose trend matches a specified pattern (e.g., "recent upward trend").

More recent methods for performing trend analysis in text mining have been predicated on the notion that the various types of concept distributions are functions of document collections. It is therefore possible to compare two distributions that are otherwise identical except that they are for different subcollections. One notable example of this is having two collections from the same source (such as a news feed) but from different points in time. For instance, one can compare the distribution of topics within Argentina-labeled documents, as formed by documents published in the first quarter of 1987, with the same distribution formed by documents from the second quarter of 1987. This comparison will highlight those topics whose proportion changed between the two time points, directing the attention of the user to specific trends or events in these topics with respect to Argentina. If R1 is used to designate a portion of a Reuters newswire data collection from the first quarter of 1987, and R2 designates the portion from the second quarter of 1987, this would correspond to comparing Ftopics(R1, x | Argentina) and Ftopics(R2, x | Argentina).
This knowledge discovery operation can be supplemented by listing trends that were identified across different quarters in the time period represented by the Reuters collection by computing D(Fcountries(R1, x | countries) || Fcountries(R2, x | countries)), where R1 and R2 correspond to different subcollections from different quarters.³ A text mining system could also calculate the percentage and absolute frequency for Fcountries(x | countries) for each such pair of collections.

³ It would also be quite fair to ask for a distribution FK(x | K), which analyzes the co-occurrences of different keywords under the same node of the hierarchy. Thus, for example, Fcountries(x | countries) would analyze the co-occurrences of country labels on the various documents.

Ephemeral Associations
An ephemeral association has been defined by Montes-y-Gomez et al. (2001b) as a direct or inverse relation between the probability distributions of given topics (concepts) over a fixed time span. This type of association differs notionally from the more typical association form A ⇒ B because it not only indicates the co-occurrence of two topics or sets of topics but primarily indicates how these topics or sets of topics are related within the fixed time span.

Examples of ephemeral associations can be found in news feeds in which one very frequently occurring or "peak" topic during a period seems to influence either the emergence or disappearance of other topics. For instance, news stories (documents) about a close election that involves allegations of election machine fraud may correlate with the emergence of stories about election machine technology or vote-fraud stories from the past. This type of ephemeral association is referred to as a direct ephemeral association. On the other hand, news stories relating to the victory of a particular tennis player in a major tournament may correlate with a noticeable and timely decrease in stories mentioning other tennis players who were formerly widely publicized. Such momentary negative influence of one topic on another is referred to as an inverse ephemeral association.

One statistical method suggested by Montes-y-Gomez et al. (2001b) to detect ephemeral associations is the correlation measure r:

r = S01 / √(S00 S11), where Skl = Σ{i=1..n} p_k^i p_l^i − (1/n)(Σ{i=1..n} p_k^i)(Σ{i=1..n} p_l^i), for k, l = 0, 1.

Within this method, p_0^i is the probability of the peak topic, and p_1^i is the probability of the other topic, in the period i. The correlation coefficient r attempts to measure how well two variables – here, topics or concepts – are related to one another. It takes values between −1 and 1; the value −1 means there is a perfect inverse relationship between the two topics, whereas the value 1 denotes a perfect direct relationship. The value 0 indicates the absence of a relation. A minimal computational sketch of r is given below.
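As referenced above, a minimal sketch of the correlation measure r (Python; the per-period probability series are hypothetical inputs):

def ephemeral_correlation(p0, p1):
    """r = S01 / sqrt(S00 * S11) for two equally long per-period probability series."""
    n = len(p0)
    def S(a, b):
        return sum(x * y for x, y in zip(a, b)) - sum(a) * sum(b) / n
    return S(p0, p1) / (S(p0, p0) * S(p1, p1)) ** 0.5

# A peak topic rises while another topic fades: a strong inverse association.
print(ephemeral_correlation([0.1, 0.4, 0.7, 0.9], [0.8, 0.5, 0.3, 0.1]))  # close to -1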
Deviation Detection
Users of text mining systems are sometimes interested in deviations – that is, the identification of anomalous instances that do not fit a defined "standard case" in large amounts of data. The normative case is a representation of the average element in a data collection. For instance, among news feed documents and the topics (concepts) that they contain, a particular topic can be considered a deviation if its probability distribution greatly diverges from the distributions of other topics in the same sample set.

Research into deviation detection for text mining is still in its early, formative stages, and we will not discuss it in detail here. However, work has been done by Montes-y-Gomez, Gelbukh, and Lopez-Lopez (Montes-y-Gomez et al. 2001b) and others to examine the difficult task of detecting deviations among documents in large collections of news stories, which might be seen as an application of knowledge discovery for distribution-type patterns. In such applications, time can also be used as an element in defining the norm. In addition, one can compare norms for various time-based subsets of a document collection to find individual news documents whose topics substantially deviate from the topics mentioned by other news sources. Sometimes such deviating individual documents are referred to as deviation sources.

From Context Relationships to Trend Graphs
Another approach to exploring the evolution of concept relationships is to examine temporal context relationships. Temporal context relationships are most typically represented by two analytical tools: the temporal context graph and the trend graph. Before describing these time-based, context-oriented analytical tools, we expend a little effort explicating the more general notions of context in document collections. Indeed, both temporal context graphs and trend graphs build on the notion of the context relationship and its typical visual representation in the form of the context graph.

Context Phrases and Context Relationships
Generally, a context relationship in a document collection is the relationship within a set of concepts found in the document collection in relation to a separately specified concept (sometimes referred to as the context or the context concept). A context relationship search might entail identifying all relationships within a set of company names within the context of the concept "bankruptcy." A context phrase is the name given to a subset of documents in a document collection that is labeled with either all, or at least one, of the concepts in a specified set of concepts. Formal definitions for both context phrases and the context relationship are as follows:

Definition II.11. Context Phrase: If D is a collection of documents and C is a set of concepts, D/A(C) is the subset of documents in D labeled with all the concepts in C, and D/O(C) is the subset of documents in D labeled with at least one of the concepts in C. Both A(C) and O(C) are referred to as context phrases.

Definition II.12. Context Relationship: If D is a collection of documents, c1 and c2 are individual concepts, and P is a context phrase, R(D, c1, c2 | P) is the number of documents in D/P that include both c1 and c2. Formally, R(D, c1, c2 | P) = |(D/P)/A({c1, c2})|.
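A minimal sketch of these two operators (Python; same toy set-of-labels representation as earlier, with all names illustrative):

def context_subset(D, C, mode="all"):
    """D/A(C) when mode='all'; D/O(C) when mode='any'."""
    C = set(C)
    match = (lambda doc: C <= doc) if mode == "all" else (lambda doc: bool(C & doc))
    return [doc for doc in D if match(doc)]

def context_relationship(D, c1, c2, P, mode="all"):
    """R(D, c1, c2 | P): documents in D/P labeled with both c1 and c2."""
    return sum(1 for doc in context_subset(D, P, mode) if c1 in doc and c2 in doc)

# e.g., co-occurrence of two (hypothetical) company concepts in the context
# of "joint venture":
#   context_relationship(docs, "ford motor co.", "chrysler corp.", {"joint venture"})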
The Context Graph
Context relationships are often represented by a context graph, which is a graphic representation of the relationship between a set of concepts (e.g., countries), as reflected in a corpus, with respect to a given context (e.g., crude oil). A context graph consists of a set of vertices (also sometimes referred to as nodes) and edges. The vertices (or nodes) of the graph represent concepts, and weighted edges denote the affinity between the concepts. Each vertex in the context graph signifies a single concept, and two concepts are connected by an edge if their similarity, with respect to a predefined similarity function, is larger than a given threshold (similarity functions in graphing are discussed in greater detail in Chapter X). A context graph is defined with respect to a given context, which determines the context in which the similarity of concepts is of interest (see Figure II.1). A context graph also has a formal definition:

Definition II.13. Context Graph: If D is a collection of documents, C is a set of concepts, and P is a context phrase, the concept graph of D, C, P is a weighted graph G = (C, E), with nodes in C and a set of edges E = {{c1, c2} | R(D, c1, c2 | P) > 0}. For each edge {c1, c2} ∈ E, one defines the weight of the edge as w{c1, c2} = R(D, c1, c2 | P).

[Figure II.1. Context graph for companies in the context of "joint venture": a weighted graph whose nodes are company names (e.g., ford motor co., chrysler corp., microsoft corp, america online inc) and whose edge weights are co-occurrence counts within the context. (From Feldman, Fresko, Hirsh, et al. 1998.)]

It is often useful to be able to examine not just concept relationships within a given concept context but also to analyze the similarities and differences in context relationships across different temporal segments of the corpus. A temporal context relationship refers specifically to the relationship between a set of concepts, as reflected across these segments (identified by individual document date and time stamps), with respect to specified contexts over time. For investigation across segments, a selected subset of documents must be created that constitutes a given temporal "segment" of the document collection as a whole.

Definition II.14. Temporal Selection ("Time Interval"): If D is a collection of documents and I is a time range, date range, or both, DI is the subset of documents in D whose time stamp, date stamp, or both is within I. The resulting selection is sometimes referred to as the time interval.

The formal definition for a temporal context relationship builds on both this definition and that supplied earlier for a generic context relationship (Definition II.12).

Definition II.15. Temporal Context Relationship: If D is a collection of documents, c1 and c2 are individual concepts, P is a context phrase, and I is the time interval, then RI(D, c1, c2 | P) is the number of documents in DI in which c1 and c2 co-occur in the context of P – that is, RI(D, c1, c2 | P) is the number of documents in DI/P that include both c1 and c2.

A temporal context graph, then, can be defined as follows:

Definition II.16. Temporal Context Graph: If D is a collection of documents, C is a set of concepts, P is a context phrase, and I is the time range, the temporal concept graph of D, C, P, I is a weighted graph G = (C, EI), with nodes in C and a set of edges EI = {{c1, c2} | RI(D, c1, c2 | P) > 0}. For each edge {c1, c2} ∈ EI, one defines the weight of the edge by wI{c1, c2} = RI(D, c1, c2 | P).

The Trend Graph
A trend graph is a very specialized representation that builds on the temporal context graph as informed by the general approaches found in trend analysis. A trend graph can be obtained by partitioning the entire timespan covered by a time- or date-stamped document collection, or both, into a series of consecutive time intervals.
These intervals can then be used to generate a corresponding sequence of temporal context graphs. This sequence of temporal context graphs can be leveraged to create combined or cumulative trend graphs that display the evolution of concept relationships in a given context by means of visual cues such as the character and relative weight of edges in the graph. For instance, several classes of edges may be used to indicate various conditions (a small computational sketch of this classification follows the list):
■ New Edges: edges that did not exist in the previous graph.
■ Increased Edges: edges that have a relatively higher weight than in the previous interval.
■ Decreased Edges: edges that have a relatively lower weight than in the previous interval.
■ Stable Edges: edges that have about the same weight as the corresponding edge in the previous interval.
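As referenced above, a minimal sketch of edge classification between two consecutive interval graphs (Python; edge weights as dictionaries keyed by concept pairs, with a hypothetical tolerance parameter standing in for "about the same weight"):

def classify_edges(w_prev, w_curr, tol=0.1):
    """Label each edge of the current interval graph relative to the previous one."""
    labels = {}
    for edge, weight in w_curr.items():
        old = w_prev.get(edge)
        if old is None:
            labels[edge] = "new"
        elif weight > old * (1 + tol):
            labels[edge] = "increased"
        elif weight < old * (1 - tol):
            labels[edge] = "decreased"
        else:
            labels[edge] = "stable"
    return labels

prev = {frozenset({"Iraq", "Arab League"}): 10}
curr = {frozenset({"Iraq", "Arab League"}): 14, frozenset({"Iraq", "UN"}): 3}
print(classify_edges(prev, curr))  # one 'increased' edge and one 'new' edge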
Handling Dynamically Updated Data
There are many situations in which the document collection for a text mining system might require frequent – perhaps even constant – updating. This regularly occurs in environments in which the maintenance of data currency is at a premium, such as when a user wants to iteratively rerun searches on topical news, time-sensitive financial information, and so on. In such situations, there is a need for documents to be added dynamically to the document collection and a concurrent need for a user of the text mining system always – that is to say, at every instance of a new document's being added to the collection – to know the full and current set of patterns for the searches that he or she has run.

An obvious solution is simply to rerun the search algorithm the user is employing from scratch whenever there is a data update. Unfortunately, this approach is computationally inefficient and resource intensive (e.g., I/O, memory capacity, disk capacity), resulting in unnecessary performance drawbacks. Additionally, users of text mining systems with large document collections or frequent updates would have to endure more significant interruptions in their knowledge mining activities than if a quicker updating mechanism were implemented that modifies search results on an increment-by-increment basis.

The more useful and sophisticated approach is to leverage knowledge from previous search runs as a foundation to which new information can be added incrementally. Several algorithms have been described for handling incremental update situations in data mining, and these algorithms also have applicability in text mining. These include the FUP, FUP2, and Delta algorithms, which all attempt to minimize the recomputation required for incremental updating of Apriori-style, frequent set, and association rule search results. Another algorithm, based on the notion of border sets in data mining, offers a very efficient and robust mechanism for treating the incremental case when dealing with discovered frequent sets and associations from natural language documents.

The Borders Incremental Text Mining Algorithm
The Borders algorithm can be used to update search pattern results incrementally. It affords computational efficiency by reducing the number of scans of the relations, reducing the number of candidates, and performing no scan at all if there is no new frequent set. The algorithm is also robust because it supports insertions and deletions as well as absolute and percentage-based thresholds. The Borders algorithm is based on the notions of border sets and negative borders. In a sense, a border set can be seen as a notion related to that of a frequent set and may be defined as follows:

Definition II.17. Border Set: X is a border set if it is not a frequent set, but any proper subset Y ⊂ X is a frequent set (see also Figure II.2).

[Figure II.2. Illustration of border sets over the lattice of concept sets for R = {(a,b,c), (a,b,d), (a,c), (b,c)} with s = 2.]

The full benefit of the Borders algorithm can be appreciated when one attempts to accommodate incremental data updates of association rules. The Apriori algorithm for generating associations entails two main steps, beginning with the discovery of frequent sets through multiple scans of the relations. This first-step search for frequent sets is very often the most computationally expensive part of association discovery. For each of the relation scans, a set of candidates is assembled, and, during each scan, the support of each candidate is computed. The Borders algorithm functions initially to reduce the number of relation scans. Generally this serves to reduce the number of candidates. In addition, the algorithm does not perform a scan if no new frequent set is identified.

Some important notational elements for discussing the Borders algorithm are described below.
■ Concept set A = {A1, . . . , Am}
■ Relations over A: Rold (the old relation), Rinc (the increment), and Rnew (the new combined relation)
■ s(X, R): the support of concept set X in the relation R
■ s∗: the minimum support threshold (minsup)

The Borders algorithm also makes use of two fundamental properties.
■ Property 1: if X is a new frequent set in Rnew, then there is a subset Y ⊆ X such that Y is a promoted border.
■ Property 2: if X is a new k-sized frequent set in Rnew, then each subset Y ⊆ X of size k − 1 is one of the following: (a) a promoted border, (b) a new frequent set, or (c) an old frequent set with additional support in Rinc.

The Borders algorithm itself can be divided into two stages.
■ Stage 1: Finding Promoted Borders and Generating Candidates. Maintain the support for all borders and frequent sets. When new data arrive, for each border B of Rold, compute s(B, Rinc) and s(B, Rnew) = s(B, Rold) + s(B, Rinc). If s(B, Rnew) ≥ s∗, then B is a promoted border. If a promoted border does exist, run an Apriori-like algorithm and generate candidates using Property 1 and Property 2.
■ Stage 2: Processing Candidates.

    L1 = PB(1), i = 1
    while (Li ≠ Ø or i ≤ the size of the largest promoted border) do
        Candidates(i+1) = {X | |X| = i + 1,
                           ∃ Y ⊂ X, |Y| = i, Y ∈ PB(i) ∪ Li,
                           ∀ Z ⊂ X, |Z| = i, Z ∈ PB(i) ∪ F(i) ∪ Li}
        scan the relation and compute s(X, Rnew) for each candidate X
        Li+1 = {X a candidate : s(X, Rnew) ≥ s∗}
        i = i + 1
    end do

Here PB(i) denotes the promoted borders of size i and F(i) the old frequent sets of size i. See Figure II.3 for an illustration of promoted borders.

[Figure II.3. Illustration of promoted borders and new borders for R = {(a,b,c), (a,b,d), (a,c), (b,c)} with s = 2 after adding the transaction (a,b,d).]

With the Borders algorithm, full relations are never scanned if there is no new frequent set. Moreover, because of its parsimony in scanning for relations, the algorithm is likely to yield a small candidate set.
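Stage 1 reduces to a few lines over the toy representation (Python; a sketch of promoted-border detection only, with supports for old borders assumed to be cached from the previous run):

def promoted_borders(borders, support_old, R_inc, s_star):
    """Find old border sets promoted to frequent by the increment R_inc."""
    promoted = []
    for B in borders:
        s_inc = sum(1 for t in R_inc if B <= t)   # one scan of the increment only
        if support_old[B] + s_inc >= s_star:      # s(B, Rnew) = s(B, Rold) + s(B, Rinc)
            promoted.append(B)
    return promoted

# With the Figure II.2/II.3 example (s* = 2), the border {d} has old support 1
# and is promoted once the transaction (a,b,d) arrives; {a,b,c} is not.
borders = [frozenset({"d"}), frozenset({"a", "b", "c"})]
support_old = {frozenset({"d"}): 1, frozenset({"a", "b", "c"}): 1}
print(promoted_borders(borders, support_old, [{"a", "b", "d"}], 2))  # [frozenset({'d'})]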
Percentage thresholds can be incorporated into incremental update schemes for text mining systems in conjunction with the Borders algorithm. For instance, we can define the threshold as σ percent of the size of the relation, and thus s∗ = σ|R|. The key point for this type of operation is to redetermine the type of each set according to the new threshold before running the algorithm.

Deletions with absolute thresholds for incremental data can be accommodated relatively straightforwardly: s(X, Rnew) = s(X, Rold) − s(X, Rinc). For percentage-type thresholds, the approach to handling deletions is perhaps a bit less intuitive but not too complex. In these cases, one can simply look at a deletion as a decrease in the absolute threshold and approach the deletion with the following equation:

s∗new = σ(|Rold| − |Rinc|) = s∗old − s∗inc.

General changes to the threshold value should also be supported. Increasing the threshold is relatively easy, for only borders and frequent sets need be considered. On the other hand, an approach to decreasing the threshold might be to view each border X with s(X, Rnew) ≥ s∗new as a promoted border before running the Borders algorithm.

II.1.6 Citations and Notes

Section II.1–II.1.1
The primary source leveraged for information throughout Section II.1 is Feldman, Dagan, et al. (1998). Although focused more on visualization, Hearst (1995) also provides some interesting general background for the topic. Definitions II.1 through II.7 derive from descriptions of distribution and proportion types identified in Feldman, Dagan, et al. (1998).

Section II.1.2
Agrawal, Imielinski, and Swami (1993) and Agrawal and Srikant (1994) introduce the generation of frequent sets as part of the Apriori algorithm. Beyond Agrawal et al.'s seminal research on investigating market basket–type associations (Agrawal et al. 1993), other important works shaping the present-day understanding of frequent concept sets include Agrawal and Srikant (1994) and Silverstein, Brin, and Motwani (1999). In addition, Clifton and Cooley (1999) provides a useful treatment of market basket problems and describes how a document may be viewed as a market basket of named entities. Feldman, Aumann, Amir, et al. (1997); Rajman and Besancon (1997b); and Rajman and Besancon (1998) discuss the application of elements of the Apriori algorithm to textual data. Algorithm II.1 in Section II.1.2 was taken from Agrawal and Srikant (1994).

Rajman and Besancon (1997b) provides the background for Section II.1.2's discussion of the discovery of frequent concept sets. Although Algorithm II.2 in Section II.1.2 is a generalized and simple one for frequent set generation based on the notions set forth in Agrawal et al. (1993) and Agrawal and Srikant (1994), Rajman and Besancon (1997b) provides a slightly different but also useful algorithm for accomplishing the same task.

Section II.1.3
In addition to presenting the framework for generating frequent sets, the treatment of the Apriori algorithm by Agrawal et al. (1993) also provided the basis for generating associations from large (structured) data sources. Subsequently, associations have been widely discussed in the literature relating to knowledge discovery targeted at both structured and unstructured data (Agrawal and Srikant 1994; Srikant and Agrawal 1995; Feldman, Dagan, and Kloesgen 1996a; Feldman and Hirsh 1997; Rajman and Besancon 1998; Nahm and Mooney 2001; Blake and Pratt 2001; Montes-y-Gomez et al. 2001b; and others). The definitions for association rules found in Section II.1.3 derive primarily from Agrawal et al. (1993), Montes-y-Gomez et al. (2001b), Rajman and Besancon (1998), and Feldman and Hirsh (1997).
Definitions of minconf and minsup thresholds have been taken from Montes-y-Gomez et al. (2001b) and Agrawal et al. (1993). Rajman and Besancon (1998) and Feldman and Hirsh (1997) both point out that the discovery of frequent sets is the most computationally intensive stage of association generation.

The algorithm example for the discovery of associations found in Section II.1.3's Algorithm 3 comes from Rajman and Besancon (1998); this algorithm was directly inspired by Agrawal et al. (1993). The ensuing discussion of this algorithm's implications was influenced by Rajman and Besancon (1998); Feldman, Dagan, and Kloesgen (1996a); and Feldman and Hirsh (1997).

Maximal associations are most recently and comprehensively treated in Amir et al. (2003), and much of the background for the discussion of maximal associations in Section II.1.3 derives from this source. Feldman, Aumann, Amir, et al. (1997) is also an important source of information on the topic. The definition of a maximal association rule in Section II.1.3, along with Definition II.8 and its ensuing discussion, comes from Amir, Aumann, et al. (2003); this source is also the basis for Section II.1.3's discussion of the M-factor of a maximal association rule.

Section II.1.4
Silberschatz and Tuzhilin (1996) provides perhaps one of the most important discussions of interestingness with respect to knowledge discovery operations; this source has influenced much of Section II.1.4. Blake and Pratt (2001) also makes some general points on this topic. Feldman and Dagan (1995) offers an early but still useful discussion of some of the considerations in approaching the isolation of interesting patterns in textual data, and Feldman, Dagan, and Hirsh (1998) provides a useful treatment of how to approach the subject of interestingness with specific respect to distributions and proportions. Definitions II.9 and II.10 derive from Feldman, Dagan, and Hirsh (1998).

Section II.1.5
Trend analysis in text mining is treated by Lent et al. (1997); Feldman and Dagan (1995); Feldman, Dagan, and Hirsh (1998); and Montes-y-Gomez et al. (2001b). Montes-y-Gomez et al. (2001b) offers an innovative introduction to the notions of ephemeral associations and deviation detection; this is the primary recent source for information relating to these two topics in Section II.1.5.

The analysis of sequences and trends with respect to knowledge discovery in structured data has been treated in several papers (Mannila, Toivonen, and Verkamo 1995; Srikant and Agrawal 1996; Keogh and Smyth 1997; Bettini, Wang, and Jajodia 1996; and Mannila, Toivonen, and Verkamo 1997). Algorithms based on the identification of episodes (Mannila et al. 1995) and sequential patterns (Srikant and Agrawal 1996) in large data repositories have been described as mechanisms for better mining of implicit trends in data over time. Related work on time series analysis has also been discussed (Agrawal and Srikant 1995; Keogh and Smyth 1997).

Lent et al. (1997) and Feldman, Aumann, Zilberstein, et al. (1997) emphasize that trend analysis focused on text mining relates to collections of documents that can be viewed as subcollections defined, in part, by time. These two works are among the most important entry points for the literature of trend analysis in text mining. Montes-y-Gomez et al. (2001b) also makes very interesting contributions to the discussion of the topic.
Definitions related to ephemeral associations come from Montes-y-Gomez et al. (2001b); the terms ephemeral association and deviation detection are used in this chapter within the general definitional context of this source. Use of the correlation measure r in the detection of ephemeral associations also comes from this source, building on original work found in Freund and Walpole (1990). Finally, the examples used to illustrate direct and inverse ephemeral associations are based on the discussions contained in Montes-y-Gomez et al. (2001b).

The discussion of deviation detection in Section II.1.5 has been shaped by several sources, including Montes-y-Gomez et al. (2001b); Knorr, Ng, and Tucakov (2000); Arning, Agrawal, and Raghavan (1996); Feldman and Dagan (1995); and Feldman, Aumann, Zilberstein, et al. (1997). Much of the terminology in this section derives from Montes-y-Gomez et al. (2001b). The term deviation sources was coined in Montes-y-Gomez et al. (2001b).

Much of Section II.1.5's discussion of context and trend graphs derives directly from Feldman, Aumann, Zilberstein, et al. (1997), as do Definitions II.11, II.12, II.13, II.14, II.15, and II.16. The trend graph described in Section II.1.5 has also, in a general way, been influenced by Lent et al. (1997).

Feldman, Amir, et al. (1996) was an early work focusing on measures that would support a text mining system's ability to handle dynamically updated data. The FUP incremental updating approach comes from Cheung et al. (1996), FUP2 is formalized in Cheung, Lee, and Kao (1997), and the Delta algorithms were identified in Feldman, Amir, et al. (1996). The notion of border sets was introduced, with respect to data mining, in Mannila and Toivonen (1996). Much of the discussion of border sets in this section is an application of the border set ideas of Mannila and Toivonen (1996) to collections of text documents. The Apriori algorithm for generating associations was identified in Agrawal et al. (1993) and Agrawal and Srikant (1994).

II.2 USING BACKGROUND KNOWLEDGE FOR TEXT MINING

II.2.1 Domains and Background Knowledge
As has already been described in Section II.1, concepts derived from the representations of documents in text mining systems belong not only to the descriptive attributes of particular documents but generally also to domains. A domain can be loosely defined as a specialized area of interest for which formal ontologies, lexicons, and taxonomies of information may be created. Domains can exist for very broad areas of interest (e.g., economics or biology) or for more narrow niches (e.g., macroeconomics, microeconomics, mergers, acquisitions, fixed income, equities, genomics, proteomics, zoology, virology, immunology, etc.).

Much of what has been written about the use of domain knowledge (also referred to as background knowledge) in classic data mining concerns its use as a mechanism for constraining knowledge discovery search operations. From these works, it is possible to generalize three primary forms of usable background knowledge from external sources for data mining applications: (a) constraints, (b) attribute relationship rules, and (c) "hierarchical trees" or "category domain knowledge." More recent literature, however, suggests that other types and implementations of background knowledge may also be useful in data mining operations.
Text mining systems, particularly those with some pronounced elements of domain specificity in their orientation, can leverage information from formal external knowledge sources for these domains to greatly enhance a wide variety of elements in their system architecture. Such elements include those devoted to preprocessing, knowledge discovery, and presentation-layer operations. Even text mining systems without pronounced elements of domain specificity in their design or usage, however, can potentially benefit from the inclusion of information from knowledge sources relating to broad but still generally useful domains such as the English language or world almanac–type facts.

Indeed, background knowledge can be used in text mining preprocessing operations to enhance concept extraction and validation activities. Furthermore, access to background knowledge can play a vital role in the development of meaningful, consistent, and normalized concept hierarchies.

Background knowledge, in addition, may be utilized by other components of a text mining system. For instance, one of the most clear and important uses of background knowledge in a text mining system is the construction of meaningful constraints for knowledge discovery operations. Likewise, background knowledge may also be used to formulate constraints that allow users greater flexibility when browsing large result sets or in the formatting of data for presentation.

II.2.2 Domain Ontologies
Text mining systems exploit background knowledge that is encoded in the form of domain ontologies. A domain ontology, sometimes also referred to less precisely as a background knowledge source or knowledge base, might be informally defined as the set of all the classes of interest and all the relations between these classes for a given domain. Perhaps another way of describing this is to say that a domain ontology houses all the facts and relationships for the domain it supports. Some see a grouping of facts and relationships as a vocabulary constructed in such a way as to be both understandable by humans and readable by machines.

A more formal – albeit very generic – definition for a domain ontology can be attempted with the following notation proposed by Hotho et al. (2003), derived generally from research into formal concept analysis:

Definition II.18. Domain Ontology with Domain Hierarchy: A domain ontology is a tuple O := (C, ≤C) consisting of a set C whose elements are called concepts and a partial order ≤C on C, which is labeled a concept hierarchy or taxonomy.

One example of a real-world ontology for a broad area of interest can be found in WordNet, an online, public domain ontology originally created at Princeton University that has been designed to model the domain of the English language. Version 1.7 of WordNet contains approximately 110,000 unique concepts (referred to as synsets by WordNet's designers); the ontology also has a sophisticated concept hierarchy that supports relation-type information.

WordNet can be used as a "terminological knowledge base" of concepts, concept types, and concept relations to provide broadly useful background knowledge relating to the domain of the English language. A WordNet synset represents a single unique instance of a concept meaning related to other synsets by some type of specified relation.
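Definition II.18 maps naturally onto a small data structure. The following Python sketch is one possible reading of it; the class, the toy hierarchy, and the hypernym-style parent links are illustrative assumptions rather than any established API.

class Ontology:
    # A domain ontology O = (C, <=_C): a set of concepts plus the partial
    # order induced by direct parent (superconcept) links.
    def __init__(self, parents):
        self.parents = parents     # e.g. {"dog": {"mammal"}, ...}
        self.concepts = set(parents) | set().union(*parents.values())

    def subsumed_by(self, c, d):
        # True iff c <=_C d, i.e., d is reachable from c via parent links.
        if c == d:
            return True
        return any(self.subsumed_by(p, d) for p in self.parents.get(c, ()))

# Hypothetical toy hierarchy in the spirit of WordNet's hypernym relation:
onto = Ontology({"dog": {"mammal"}, "mammal": {"animal"}, "animal": set()})
assert onto.subsumed_by("dog", "animal")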
Interestingly, WordNet also supports a lexicon of about 150,000 lexical entries (in WordNet's terminology, "words") that might more generally be viewed as a list of lexical identifiers or "names" for the concepts stored in the WordNet ontology. Users of WordNet can query both its ontology and its lexicon.

Another ontology implementation that models a narrower subject area domain is the Gene Ontology™ or GO knowledge base administered by the Gene Ontology Consortium. The GO knowledge base serves as a controlled vocabulary that describes gene products in terms of their associated biological processes, cellular components, and molecular functions. In this controlled vocabulary, great care is taken both to construct and define concepts and to specify the relationships between them. Then, the controlled vocabulary can be used to annotate gene products.

GO actually comprises several different structured knowledge bases of information related to various species, coordinates, synonyms, and so on. Each of these ontologies constitutes structured vocabularies in the form of directed acyclic graphs (DAGs) that represent a network in which each concept ("term" in the GO terminology) may be the "child" node of one or more than one "parent" node. An example of this from the GO molecular function vocabulary is the function concept transmembrane receptor protein-tyrosine kinase and its relationship to other function concepts; it is a subclass both of the parent concept transmembrane receptor and of the parent concept protein tyrosine kinase. Figure II.4 provides a high-level view of the Gene Ontology structure.

Figure II.4. Schematic of the Gene Ontology structure, shown as a directed acyclic graph from the top of the ontology (molecular function) down through parent and child terms to the genes to which GO terms are annotated. (From GO Consortium 2001.)

Several researchers have reported that the GO knowledge base has been used for background knowledge and other purposes. Moreover, the Gene Ontology Consortium has developed various specialized browsers and mapping tools to help developers of external systems leverage the background knowledge extractable from the GO knowledge base.

II.2.3 Domain Lexicons
Text mining systems also leverage background knowledge contained in domain lexicons. The names of domain concepts – and the names of their relations – make up a domain ontology's lexicon. The following definitions come from Hotho et al. (2003).

Definition II.19. Domain Lexicon: A lexicon for an ontology O is a tuple Lex := (SC, RefC) consisting of a set SC, whose elements are called names of concepts, and a relation RefC ⊆ SC × C called lexical reference for concepts, for which (c, c) ∈ RefC holds for all c ∈ C ∩ SC. Based on RefC, we define, for s ∈ SC, RefC(s) := {c ∈ C | (s, c) ∈ RefC} and, for c ∈ C, RefC⁻¹(c) := {s ∈ SC | (s, c) ∈ RefC}.

For the typical situation – such as the WordNet example – of an ontology with a lexicon, one can also use a simple notation:

Definition II.20. Domain Ontology with Lexicon: An ontology with lexicon is a pair (O, Lex), where O is an ontology and Lex is a lexicon for O.

A lexicon such as that available with WordNet can serve as the entry point to background knowledge.
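Definition II.19 likewise suggests a direct implementation. In the sketch below, ref and inv play the roles of RefC and RefC⁻¹; the sample entries are hypothetical, WordNet-style names chosen to show that both synonymy and polysemy are captured.

class Lexicon:
    # Lex = (S_C, Ref_C): concept names plus a lexical reference relation.
    def __init__(self, pairs):                   # pairs is a subset of S_C x C
        self.ref, self.inv = {}, {}
        for s, c in pairs:
            self.ref.setdefault(s, set()).add(c)      # Ref_C(s)
            self.inv.setdefault(c, set()).add(s)      # Ref_C^{-1}(c)

# Two names for one concept (synonymy), one name for two concepts (polysemy):
lex = Lexicon([("car", "automobile#1"), ("auto", "automobile#1"),
               ("bank", "riverbank#1"), ("bank", "bank#2")])
assert lex.ref["auto"] == {"automobile#1"}
assert len(lex.ref["bank"]) == 2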
Using a lexicon, a text mining system could normalize the concept identifiers available for annotation of documents in its corpus during preprocessing in a way that supports, by means of the lexicon's related ontology, both the resolution of synonyms and the extraction of rich semantic relationship information about concepts.

II.2.4 Introducing Background Knowledge into Text Mining Systems
Background knowledge can be introduced into text mining systems in various ways and at various points in a text mining system's architecture. Although there may be any number of arguments about how background knowledge can enrich the value of knowledge discovery operations on document collections, there are three main practical reasons why background information is so universally important in text mining systems.

First, background knowledge can be used in a text mining system to limit pattern abundance. Background knowledge can be crafted into constraints that allow for more efficient and meaningful queries; such constraints can be used for a variety of other purposes as well. Second, background knowledge is an extremely efficient mechanism for resolving questions of concept synonymy and polysemy at the level of search. Access to an ontology that stores both lexical references and relations allows for various types of resolution options. Third, background knowledge can be leveraged in preprocessing operations to create both a consistent lexical reference space and consistent hierarchies for concepts that will then be useful throughout subsequent query, presentation, and refinement operations.

Perhaps the simplest method of integrating background knowledge into a text mining system is using it in the construction of meaningful query constraints. For instance, with respect to association discovery, concepts in a text mining system can be preprocessed into either some hierarchical form or clusters representing some limited number of categories or classes of concepts. These categories can then be compared against some relevant external knowledge source to extract interesting attributes for these categories and relations between categories.

A tangible example of this kind of category- or class-oriented background knowledge constraint is a high-level category like company, which might, after reference to some commercial ontology of company information, be found to have commonly occurring attributes such as ProductType, Officers, CEO, CFO, BoardMembers, CountryLocation, Sector, Size, or NumberOfEmployees. The category company could also have a set of relations to other categories such as IsAPartnerOf, IsACustomerOf, IsASupplierTo, IsACompetitorTo, or IsASubsidiaryOf. These category attributes and relations could then be used as constraints available to a user on a pick list when forming a specific association-discovery query relating either to the class company or to a concept that is a particular member of that class. The resulting query expression (with constraint parameter) would allow the user to specify the LHS and RHS of his or her query more carefully and meaningfully. The inclusion of these types of constraints not only increases user interactivity with a text mining system, because the user will be more involved in specifying interesting query parameters, but can also limit the amount of unwanted patterns resulting from underspecified or inappropriately specified initial queries.
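As a rough illustration of how such class-oriented constraints might be applied, the sketch below filters candidate associations through a toy company ontology. All names and facts in it (company_facts, the IsASupplierTo entries, the companies themselves) are hypothetical.

# Keep only rules whose LHS company stands in a chosen relation to the RHS.
company_facts = {
    "Acme Corp": {"Sector": "Energy", "IsASupplierTo": {"Widget Inc"}},
    "Widget Inc": {"Sector": "Manufacturing", "IsASupplierTo": set()},
}

def satisfies(rule, relation):
    # rule = (lhs_concept, rhs_concept); check the relation in the ontology.
    lhs, rhs = rule
    return rhs in company_facts.get(lhs, {}).get(relation, set())

rules = [("Acme Corp", "Widget Inc"), ("Widget Inc", "Acme Corp")]
print([r for r in rules if satisfies(r, "IsASupplierTo")])
# -> [('Acme Corp', 'Widget Inc')]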
Further, background information constraints can be used in an entirely different way – namely, in the formatting of presentation-level displays of query results. For instance, even if a user did not specify particular constraints as parameters to his or her query expression, a text mining system could still "add value" to the display of the result set by, for instance, highlighting certain associations for which particular preset constraint conditions have been met. An example of this might be that, in returning a result set to a query for all companies associated with crude oil, the system could highlight in blue those companies identified as suppliers of crude oil, whereas those companies that are buyers of crude oil could be highlighted in red. Such color coding might aid users' exploration of data in the result set because these data provide more information to the user than a bland listing of associations differentiated only by confidence level.

Another common use of background knowledge is in the creation of consistent hierarchical representations of concepts in the document collection. During preprocessing – or even during a query – groups of concepts can be compared against some normalized hierarchical form generated from an ontology. The resulting concept hierarchy has the benefit of being both informed by the domain knowledge about relationships collected in the ontology and more consistently integrated with the external source in the event that other types of system operations require reference to information contained in the ontology.

II.2.5 Real-World Example: FACT
FACT (Finding Associations in Collections of Text) was a text mining system developed by Feldman and others during the late 1990s. It represented a focused effort at enhancing association discovery by means of several constraint types supplied by a background knowledge source. In this, it created a very straightforward example of how background knowledge could be leveraged to clear practical effect in knowledge discovery operations on document collections.

General Approach and Functionality
The FACT system might essentially be seen as an advanced tool focused specifically on the discovery of associations in collections of keyword (concept)-labeled text documents. Centering on the association discovery query, the FACT system provided a robust query language through which a user could specify queries over the implicit collection of possible query results supported by the documents in the collection. Rather than requiring the specification of an explicit query expression in this language, FACT presented the user with a simple-to-use graphical interface in which a user's various discovery tasks could be specified, and the underlying query language provided a well-defined semantics for the discovery actions performed by the user through the interface (see Figure II.5).

Perhaps most importantly, FACT was able to exploit some basic forms of background knowledge. Running against a document collection of newswire articles, FACT used a simple textual knowledge source (the CIA World Factbook) to exploit knowledge relating to countries. FACT was able to leverage several attributes relating to a country (size, population, export commodities, organizational memberships, etc.) as well as information about relationships between countries (e.g., whether countries were neighbors or trading partners, had a common language, had a common border, etc.).
Using this background knowledge to construct meaningful constraints, FACT allowed a user, when making a query, to include constraints over the set of desired results. Finally, FACT also exploited these constraints in how it structured its search for possible results. This background knowledge thus enabled FACT to, for example, discover associations between a G7 country that appeared as a concept label of a document and other nonbordering G7 countries that also appeared as concept labels of the document.

System Architecture
FACT's system architecture was straightforward. In a sense, all system components centered on the execution of a query (see Figure II.6).

Figure II.5. FACT's query specification interface. (From Feldman and Hirsh 1997. Reprinted with permission of John Wiley and Sons.)

The system's query execution core operations took three inputs – the annotated document collection, distilled background knowledge, and a user's knowledge-discovery query – to create output that was passed to a presentation-layer tool that formatted the result set for display and user browsing. The system provided an easy-to-use interface for a user to compose and execute an association discovery query, supplemented by constraints for particular types of keywords that had been derived from an external knowledge source. The system then ran the fully constructed query against a document collection whose documents were represented by keyword annotations that had been pregenerated by a series of text categorization algorithms. Result sets could be returned in ways that also took advantage of the background knowledge–informed constraints. A user could explore a result set for a query and then refine it using a different combination of constraints.

Implementation
The document collection for the FACT system was created from the Reuters-22173 text categorization test collection, a collection of documents that appeared on the Reuters newswire in 1987. Because this collection was already categorized, it obviated the need to build any system elements to preprocess the document data using categorization algorithms. The Reuters-22173 documents were preassembled and preindexed with categories by personnel from Reuters Ltd. and Carnegie Group, Inc., and some final formatting was manually applied. The Reuters personnel tagged each document with a subset of 135 keywords that fell into five overarching categories: countries, topics, people, organizations, and stock exchanges.

Figure II.6. System architecture of FACT: parsing, text categorization algorithms, and query execution feed a presentation module, drawing on keyword-annotated documents, background knowledge distilled from external knowledge sources, and the user's query. (From Feldman and Hirsh 1997. Reprinted with permission of John Wiley and Sons.)

The 1995 CIA World Factbook that served as the FACT system's ostensible ontology and background knowledge source was a structured document containing information about each of the countries of the world; it was divided into six sections: Geography, People, Government, Economy, Communications, and Defense Forces.
For experimentation with the Reuters-22173 data, the following background information was extracted for each country C:

■ MemberOf: all organizations of which C is a member (e.g., G7, Arab League, EC),
■ LandBoundaries: the countries that have a land border with C,
■ NaturalResources: the natural resources of C (e.g., crude, coal, copper, gold),
■ ExportCommodities: the main commodities exported by C (e.g., meat, wool, wheat),
■ ExportPartners: the principal countries to which C exports its ExportCommodities,
■ ImportCommodities: the main commodities imported by C (e.g., meat, wool, wheat),
■ ImportPartners: the principal countries from which C imports its ImportCommodities,
■ Industries: the main industries of C (e.g., iron, steel, machines, textiles, chemicals), and
■ Agriculture: the main agricultural products of C (e.g., grains, fruit, potatoes, cattle).

The first boldfaced element before the colon defines a unary predicate, and the remainder of each entry constitutes a binary predicate over the set of keywords that can label the documents in the Reuters-22173 collection. Users could browse this background knowledge in FACT by means of a utility (see Figure II.7).

Figure II.7. FACT's background knowledge viewer showing the countries having land boundaries with Saudi Arabia. (From Feldman and Hirsh 1997. Reprinted with permission of John Wiley and Sons.)

For its main association-discovery algorithm, FACT implemented a version of the two-phase Apriori algorithm. After generating σ-covers, however, FACT modified the traditional association-discovery phase to handle the various types of constraints that had been generated from the CIA World Factbook.

Upon completion of a query, FACT executed its query code and passed a result set back to a specialized presentation tool, the FACT system's association browser. This browser performed several functions. First, it filtered out redundant results. Second, it organized results hierarchically – identifying commonalities among the various discovered associations and sorting them in decreasing order of confidence. Further, the tool housed this hierarchical, sorted representation of the result set in a screen presentation that enabled a user to browse the titles of documents supporting each of the individual associations in the result set simply by pointing and clicking on that association (see Figure II.8).

Figure II.8. FACT's association browser presentation module showing a result set for associations of Arab League countries with countries sharing a border. (From Feldman and Hirsh 1997. Reprinted with permission of John Wiley and Sons.)

Experimental Performance Results
FACT appeared to perform well on queries of the form "find all associations between a set of countries including Iran and any person" and "find all associations between a set of topics including Gold and any country" as well as on more complex queries that included constraints. One interesting – albeit still informal and crude – experiment performed on the system was to see whether there was any performance difference (based on a comparison of CPU time) between query templates with and without constraints. In most cases, the queries involving constraints extracted from background knowledge appeared to be noticeably more efficient in terms of CPU time consumption.
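The unary/binary predicate scheme just described can be sketched in a few lines of Python. The attribute names follow the list above, but the country data and helper functions below are hypothetical illustrations, not the actual FACT data structures.

factbook = {
    "saudi arabia": {
        "MemberOf": {"OPEC", "Arab League"},
        "LandBoundaries": {"iraq", "jordan", "yemen"},
        "ExportCommodities": {"crude"},
    },
}

def member_of(c, org):        # binary predicate over keyword labels
    return org in factbook.get(c, {}).get("MemberOf", set())

def land_border(c1, c2):
    return c2 in factbook.get(c1, {}).get("LandBoundaries", set())

# e.g., constrain an association query to non-bordering country pairs:
print(member_of("saudi arabia", "OPEC"), land_border("saudi arabia", "iraq"))
# -> True True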
Some practical difficulties were encountered in trying to convert the CIA World Factbook into unary and binary predicates when the vocabulary in the Factbook differed from the universe of keywords labeling the Reuters documents (Feldman and Hirsh 1997). This is a problem that can creep into almost any text mining system that attempts to integrate background knowledge. FACT's designers put in place a point solution to resolve this problem by including additional background knowledge from a standard reference dictionary to help provide at least a basic definition of synonyms.

Obviously, today, advanced text mining systems involving background knowledge can integrate with more sophisticated dictionary-type ontologies like WordNet to resolve problems with synonymy. Further, today's designers of text mining systems can also consider various strategies for including background knowledge in preprocessing routines to help create more consistency in the concept tags that annotate document collections before the execution of any knowledge discovery algorithms.

II.2.6 Citations and Notes

Section II.2.1
For general discussion of the use of background knowledge to construct constraints in classic data mining, see Anand, Bell, and Hughes (1995) and Yoon et al. (1999). Kopanis, Avouris, and Daskalaki (2002) discusses other uses for background knowledge in data mining systems. Feldman and Hirsh (1996a) provides an early discussion of various uses of background knowledge within a text mining system.

Section II.2.2
The informal definition for a domain ontology in Section II.2.2 comes from Craven and Kumlien (1999). The definition for a domain vocabulary was derived from Gruber (1993). Definition II.18 has been taken from Hotho et al. (2003); this source provides much of the background and definitional information for the topics discussed throughout Sections II.2.2 through II.2.4.

A large body of literature exists on the subject of WordNet, but the basic overview is contained in Martin (1995); the identification of WordNet as a "terminological knowledge base" also comes from this source. Descriptions of WordNet's lexicon, concept hierarchy, and ontological structure rely on information published in Rodriguez, Gomez-Hidalgo, and Diaz-Agudo (1997) and Hotho et al. (2003).

The Gene Ontology knowledge base is described in GO Consortium (2000). The schematic of the GO knowledge base displayed in Figure II.4 comes from GO Consortium (2001); the example in Section II.2.2 involving the function concept transmembrane receptor protein-tyrosine kinase was also taken from this source. Hill et al. (2002) and Hirschman et al. (2002) have both reported use of the GO knowledge base for background knowledge purposes in knowledge discovery systems.

Section II.2.3
Definitions II.19 and II.20 as well as the WordNet examples used in discussing these definitions come from Hotho et al. (2003).

Sections II.2.4–II.2.5
The FACT system is described in Feldman and Hirsh (1996a), Feldman and Hirsh (1996b), and Feldman and Hirsh (1997), and it influenced a substantial amount of later discussion of text mining systems (Landau, Feldman, Aumann, et al. 1998; Blake and Pratt 2001; Montes-y-Gomez et al. 2001b; Nahm and Mooney 2001; and others). Most of the descriptions of the FACT system found in Section II.2.5 derive from Feldman and Hirsh (1997).
II.3 TEXT MINING QUERY LANGUAGES

Query languages for the type of generalized text mining system described in this chapter must serve several straightforward purposes. First, these languages must allow for the specification and execution of one of the text mining system's search algorithms. Second, they generally need to allow multiple constraints to be appended to a search argument; such constraints need to be specifiable by a user. Third, the query languages typically also need to perform some types of auxiliary filtering and redundancy elimination to minimize pattern overabundance in result sets.

Most text mining systems offer access to their query language either through a more abstracted and "friendly" interface that assists the user by means of pick lists, pull-down menus, and scroll bars containing preset search types and constraints or through more direct "command-line" access to the query language that exposes query language expressions in their full syntax. Some text mining systems offer both.

It is important in any implementation of a query language interface for designers of text mining systems to consider carefully the usage situations for the interfaces they provide. For instance, a user-friendly, graphically oriented tool may greatly enhance a system's ease of use, but if this tool severely limits the types of queries that may be performed, it may not meet a strict cost–benefit analysis. Similarly, direct access to a text mining system's query language to support the construction of ad hoc queries can be very advantageous for some users trying to experiment with queries involving complex combinations of constraints. If, however, such a direct query interface does not allow for robust storage, reuse, renaming, and editing of ad hoc queries as query templates, such "low level" access to the query language can become very inefficient and frustrating for users.

II.3.1 Real-World Example: KDTL
The text mining query language KDTL (knowledge discovery in text language) was first introduced in 1996 as the query language engine supporting the FACT system and was subsequently more fully described as a central element of Feldman, Kloesgen, et al.'s later Document Explorer system. KDTL's primary function is to provide a mechanism for performing queries that isolate interesting patterns. A Backus–Naur Form (BNF) description of KDTL is shown in Figure II.9.

KDTL supports all three main pattern-discovery query types (i.e., distributions, frequent sets, and associations) as well as less common graphing outputs (i.e., keyword graph, directed keyword graph). Also notice that each query contains one algorithmic statement and several constraint statements. The constraint part of the query is structured in such a way that the user needs first to select a single relevant component – that is, the left-hand side (LHS) of the association, the right-hand side (RHS), a frequent set, or a path in a keyword graph. Then, all subsequent constraint statements are applied to this component. When specifying set relations, the user can optionally specify background predicates to be applied to the given expressions. KDTL intentionally contains some redundancy in the constraint statements to facilitate easier specification of queries.

II.3.2 KDTL Query Examples
Here are some typical examples of KDTL queries executed on the Reuters-22173 document collection used by FACT and described in Section II.2.5.
Algorithmic statements:
  gen_rule() – generate all matching association rules
  gen_frequent_set() – generate all matching frequent sets
  gen_kg() – generate a keyword graph
  gen_dkg() – generate a directed keyword graph
  gen_dist() – generate a distribution

Constraint statements:
  set_filter(<component>) – the set MUST meet the following constraints
  set_not_filter(<component>) – the set MUST NOT meet the following constraints
  <component> ::= frequent_set | left | right | path
  contain([<pred>], <set>) – the designated set must contain <set> (or <pred>(<set>))
  subset([<pred>], <set>) – the designated set is a subset of <set> (or <pred>(<set>))
  disjoint([<pred>], <set>) – the designated set and <set> are disjoint (or <pred>(<set>))
  equal([<pred>], <set>) – the designated set is equal to <set> (or <pred>(<set>))
  all_has(<category>) – all members of the designated set are descendants of <category> in the taxonomy
  one_has(<category>) – at least one of the members of the designated set is a descendant of <category>
  property_count(<category>, <from>, <to>) – the number of members that are descendants of <category> is in the specified range
  size(<from>, <to>) – the size of the designated set is in the specified range
  set_conf(real)
  set_supp(integer)
  <set> ::= Keyword | Category | <set>, <set> | <set>; <set>
  <from> ::= integer
  <to> ::= integer

Figure II.9. BNF description of KDTL. (From Feldman, Kloesgen, and Zilberstein 1997a. Reprinted with permission of Springer Science and Business Media.)

In order to query only those associations that correlate a set of countries including Iran with a person, the KDTL query expression would take the following form:

set_filter(left); all_has({"countries"}); contain({"iran"});
set_filter(right); all_has({"people"}); property_count("people", 1, 1);
set_supp(4); set_conf(0.5);
gen_rule();

Run against the Reuters collection, the system would find four associations as a result of this particular query, all of which would have Reagan in the RHS (each shown with its support and confidence):

(6, 54%) Iran, Nicaragua, USA ⇒ Reagan
(6, 50%) Iran, Nicaragua ⇒ Reagan
(18, 19%) Iran, USA ⇒ Reagan
(19, 10%) Iran ⇒ Reagan

The interesting associations are those that include Iran and Nicaragua in the LHS. Upon querying the document collection, one can see that, when Iran and Nicaragua are in a document, then, if there is any person in the document, Reagan will be in that document too. In other words, the association Iran, Nicaragua ⇒ Reagan has 100-percent confidence and is supported by six documents. The person constraint means that there must be at least one person name in the document.

As another example, if one wanted to infer which people were highly correlated with West Germany (the Reuters collection was from a period before the reunification of Germany), one would formulate a query that looks for correlations between groups of one to three people and West Germany:

set_filter("left"); size(1, 3); all_has({"people"});
set_filter("right"); equal({"west germany"});
set_supp(10); set_conf(0.5);
gen_rule();

The system found five such associations; in all of them the people in the LHS were senior officials of the West German government. Kohl was the Chancellor, Poehl was the president of the Central Bank, Bangemann was the Economics Minister, and Stoltenberg was the Finance Minister. If one wanted to infer from a document collection who the high officials of a given country are, a similar query would probably yield a reasonably accurate answer. This type of example can also be used to show how background knowledge can be leveraged to eliminate trivial associations.
For instance, if a user is very familiar with German politics and not interested in seeing these particular associations, he or she might like to see associations between people who are not German citizens and Germany. Adding the constraints

set_not_filter("left"); equal(nationality, "west germany");

will eliminate all of the associations shown below.

(8, 100%) Poehl, Stoltenberg ⇒ West Germany
(6, 100%) Bangemann ⇒ West Germany
(11, 100%) Kohl ⇒ West Germany
(21, 80%) Poehl ⇒ West Germany
(44, 75%) Stoltenberg ⇒ West Germany

II.3.3 KDTL Query Interface Implementations
In Figures II.10 and II.11, one can see two elements of a sample GUI for defining KDTL queries. In the KDTL Query Editor (see Figure II.10), a user builds a query expression one constraint at a time. The tabbed dialog boxes in Figure II.11 demonstrate how the user defines a single constraint. Several different types of set constraints are supported, including background and numerical size constraints.

The results of a typical query – of the kind defined in Figures II.10 and II.11 – can be seen in Figure II.12. In this query, the object was to find all associations that connect a set of countries and a set of economic-indicator topics if trade is not in the set. Only one association satisfies all these constraints. If the last constraint had been lifted – and "trade" were allowed to be in the RHS of the association – the system would have returned 18 associations.

II.3.4 Citations and Notes

Sections II.3–II.3.2
The descriptions of KDTL in Section II.3, as well as the example of the language and the various screen shots of query interfaces, primarily come from Feldman, Kloesgen, and Zilberstein (1997a). See also Feldman and Hirsh (1997).

Figure II.10. Defining a KDTL query. (From Feldman, Kloesgen, and Zilberstein 1997a. Reprinted with permission of Springer Science and Business Media.)

Figure II.11. Defining a KDTL set constraint. (From Feldman, Kloesgen, and Zilberstein 1997a. Reprinted with permission of Springer Science and Business Media.)

Figure II.12. Interface showing KDTL query results. (From Feldman, Kloesgen, and Zilberstein 1997a. Reprinted with permission of Springer Science and Business Media.)

III Text Mining Preprocessing Techniques

Effective text mining operations are predicated on sophisticated data preprocessing methodologies. In fact, text mining is arguably so dependent on the various preprocessing techniques that infer or extract structured representations from raw unstructured data sources, or do both, that one might even say text mining is to a degree defined by these elaborate preparatory techniques. Certainly, very different preprocessing techniques are required to prepare raw unstructured data for text mining than those traditionally encountered in knowledge discovery operations aimed at preparing structured data sources for classic data mining operations.

A large variety of text mining preprocessing techniques exist. All in some way attempt to structure documents – and, by extension, document collections. Quite commonly, different preprocessing techniques are used in tandem to create structured document representations from raw textual data. As a result, some typical combinations of techniques have evolved in preparing unstructured data for text mining.
Two clear ways of categorizing the totality of preparatory document structuring techniques are according to their task and according to the algorithms and formal frameworks that they use.

Task-oriented preprocessing approaches envision the process of creating a structured document representation in terms of tasks and subtasks and usually involve some sort of preparatory goal or problem that needs to be solved, such as extracting titles and authors from a PDF document. Other preprocessing approaches rely on techniques that derive from formal methods for analyzing complex phenomena that can also be applied to natural language texts. Such approaches include classification schemes, probabilistic models, and rule-based systems.

Categorizing text mining preprocessing techniques by either their task orientation or the formal frameworks from which they derive does not mean that "mixing and matching" techniques from either category for a given text mining application is prohibited. Most of the algorithms used in text mining preprocessing activities are not specific to particular tasks, and most of the problems can be solved by several quite different algorithms.

Each of the preprocessing techniques starts with a partially structured document and proceeds to enrich the structure by refining the present features and adding new ones. In the end, the most advanced and meaning-representing features are used for the text mining, whereas the rest are discarded. The nature of the input representation and of the output features is the principal difference between the preprocessing techniques. There are natural language processing (NLP) techniques, which use and produce domain-independent linguistic features. There are also text categorization and IE techniques, which deal directly with domain-specific knowledge. Often the same algorithm is used for different tasks, constituting several different techniques. For instance, hidden Markov models (HMMs) can successfully be used for both part-of-speech (POS) tagging and named-entity extraction.

One of the important problems, as yet unsolved in general, is to combine the processes of the different techniques as opposed to simply combining their results. For instance, part-of-speech ambiguities can frequently be resolved easily by looking at the syntactic roles of the words. Similarly, structural ambiguities can often be resolved by using domain-specific information. Also, the bulk of any document does not contain relevant information but still must pass through all of the processing stages before it can be discarded by the final one, which is extremely inefficient. It is impossible to use later information to influence earlier processes. Thus, the processes must run simultaneously, influencing each other.

The algorithms used for different tasks are, however, usually very different and are difficult to redesign to run together. Moreover, such redesigning makes the algorithms strongly coupled, precluding any possibility of changing them later. Because there are several widely different algorithms for each of the separate tasks, all performing at more or less the same level, the designers of preprocessing architectures are very reluctant to commit themselves to any specific one and thus try to design their systems to be modular, as the sketch below illustrates. Still, there have recently been some attempts to find an algorithm, or a mutually consistent set of algorithms, to perform most of the preprocessing tasks in a single big step.
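The modular design favored above can be made concrete with a tiny sketch: each preprocessing stage is an interchangeable function from a partially structured document to a richer one, so individual stages can be swapped without redesigning the rest. The stage implementations here are deliberately trivial stubs, not real components.

def compose(*stages):
    # Chain preprocessing stages; each one enriches the document structure.
    def pipeline(doc):
        for stage in stages:
            doc = stage(doc)
        return doc
    return pipeline

def tokenize(doc):
    return {**doc, "tokens": doc["text"].split()}

def pos_tag(doc):
    # Stub tagger; a real system would plug in, e.g., an HMM-based tagger.
    tags = ["ProperNoun" if t[0].isupper() else "Noun" for t in doc["tokens"]]
    return {**doc, "tags": tags}

preprocess = compose(tokenize, pos_tag)
print(preprocess({"text": "Reuters reported earnings"}))
# {'text': ..., 'tokens': ['Reuters', 'reported', 'earnings'], 'tags': [...]}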
III.1 TASK-ORIENTED APPROACHES

A document is an abstract entity that has a variety of possible actual representations. Informally, the task of the document structuring process is to take the most "raw" representation and convert it to the representation through which the essence (i.e., the meaning) of the document surfaces.

A divide-and-conquer strategy is typically employed to cope with this extremely difficult problem. The problem is separated into a set of smaller subtasks, each of which is solved separately. The subtasks can be divided broadly into three classes – preparatory processing, general purpose NLP tasks, and problem-dependent tasks. The complete hierarchy of text mining preprocessing subtasks is shown in Figure III.1.

Figure III.1. A taxonomy of text preprocessing tasks: preparatory tasks (perceptual grouping); general purpose NLP tasks (tokenization, PoS tagging, stemming, shallow parsing, full parsing – constituency or dependency – and NP extraction); and problem-dependent tasks (categorization and information extraction, the latter comprising entity extraction, relation extraction, and coreference resolution).

Preparatory processing converts the raw representation into a structure suitable for further linguistic processing. For example, the raw input may be a PDF document, a scanned page, or even recorded speech. The task of the preparatory processing is to convert the raw input into a stream of text, possibly labeling the internal text zones such as paragraphs, columns, or tables. It is sometimes also possible for the preparatory processing to extract some document-level fields, such as the title or author, in cases in which the visual position of the fields allows their identification.

The number of possible sources for documents is enormous, and the number of possible formats and raw representations is also huge. Very complex and powerful techniques are sometimes required to convert some of those formats into a convenient form. Optical character recognition (OCR), speech recognition, and conversion of electronic files from proprietary formats are described elsewhere at length and are beyond the scope of the discussion here. However, one generic task that is often critical in text mining preprocessing operations and not widely covered in the knowledge discovery literature might be called perceptual grouping.

The general purpose NLP tasks process text documents using general knowledge about natural language. The tasks may include tokenization, morphological analysis, POS tagging, and syntactic parsing – either shallow or deep. The tasks are general purpose in the sense that their output is not specific to any particular problem. The output can rarely be relevant for the end user and is typically employed for further problem-dependent processing. Domain-related knowledge, however, can often enhance the performance of the general purpose NLP tasks and is often used at different levels of processing.

Finally, the problem-dependent tasks prepare the final representation of the document meaning. In text mining, categorization and information extraction are typically used.

III.1.1 General Purpose NLP Tasks
It is currently an orthodox opinion that language processing in humans cannot be separated into independent components. Various experiments in psycholinguistics clearly demonstrate that the different stages of analysis – phonetic, morphological, syntactical, semantical, and pragmatical – occur simultaneously and depend on each other.
The precise algorithms of human language processing are unknown, however, and although several systems do try to combine the stages into a coherent single process, a completely satisfactory solution has not yet been achieved. Thus, most text understanding systems employ the traditional divide-and-conquer strategy, separating the whole problem into several subtasks and solving them independently. In particular, it is possible to get quite far using only linguistics and no domain knowledge. The NLP components built in this way are valued for their generality. The tasks they are able to perform include tokenization and zoning, part-of-speech tagging and stemming, and shallow and deep syntactic parsing.

Tokenization
Prior to more sophisticated processing, the continuous character stream must be broken up into meaningful constituents. This can occur at several different levels. Documents can be broken up into chapters, sections, paragraphs, sentences, words, and even syllables or phonemes. The approach most frequently found in text mining systems involves breaking the text into sentences and words, which is called tokenization. The main challenge in identifying sentence boundaries in the English language is distinguishing between a period that signals the end of a sentence and a period that is part of a previous token like Mr., Dr., and so on.

It is common for the tokenizer also to extract token features. These are usually simple categorical functions of the tokens describing some superficial property of the sequence of characters that make up the token. Among these features are types of capitalization, inclusion of digits, punctuation, special characters, and so on.

Part-of-Speech Tagging
POS tagging is the annotation of words with the appropriate POS tags based on the context in which they appear. POS tags divide words into categories based on the role they play in the sentence in which they appear, and they provide information about the semantic content of a word. Nouns usually denote "tangible and intangible things," whereas prepositions express relationships between "things."

Most POS tag sets make use of the same basic categories. The most common set of tags contains seven different tags (Article, Noun, Verb, Adjective, Preposition, Number, and Proper Noun). Some systems contain a much more elaborate set of tags. For example, the complete Brown Corpus tag set has no fewer than 87 basic tags. Usually, POS taggers at some stage of their processing perform morphological analysis of words. Thus, an additional output of a POS tagger is a sequence of stems (also known as "lemmas") of the input words.

Syntactical Parsing
Syntactical parsing components perform a full syntactic analysis of sentences according to a certain grammar theory. The basic division is between constituency and dependency grammars.

Constituency grammars describe the syntactic structure of sentences in terms of recursively built phrases – sequences of syntactically grouped elements. Most constituency grammars distinguish between noun phrases, verb phrases, prepositional phrases, adjective phrases, and clauses. Each phrase may consist of zero or more smaller phrases or words according to the rules of the grammar. Additionally, the syntactic structure of sentences includes the roles of different phrases. Thus, a noun phrase may be labeled as the subject of the sentence, its direct object, or the complement.
Dependency grammars, on the other hand, do not recognize the constituents as separate linguistic units but focus instead on the direct relations between words. A typical dependency analysis of a sentence consists of a labeled DAG with words for nodes and specific relationships (dependencies) for edges. For instance, the subject and direct object nouns of a typical sentence depend on the main verb, an adjective depends on the noun it modifies, and so on.

Usually, the phrases can be recovered from a dependency analysis – they are the connected components of the sentence graph. Also, pure dependency analyses are very simple and convenient to use by themselves. Dependency grammars, however, have problems with certain common language constructions such as conjunctions.

Shallow Parsing
Efficient, accurate parsing of unrestricted text is not within the reach of current techniques. Standard algorithms are too expensive for use on very large corpora and are not robust enough. Shallow parsing trades depth of analysis for speed and robustness of processing. Instead of providing a complete analysis (a parse) of a whole sentence, shallow parsers produce only the parts that are easy and unambiguous. Typically, small and simple noun and verb phrases are generated, whereas more complex clauses are not formed. Similarly, the most prominent dependencies might be formed, but unclear and ambiguous ones are left unresolved. For the purposes of information extraction, shallow parsing is usually sufficient and therefore preferable to full analysis because of its far greater speed and robustness.

III.1.2 Problem-Dependent Tasks: Text Categorization and Information Extraction
The final stages of document structuring create representations that are meaningful for either later (and more sophisticated) processing phases or direct interaction by the text mining system user. Text mining techniques normally expect the documents to be represented as sets of features, which are considered to be structureless atomic entities possibly organized into a taxonomy – an IsA-hierarchy. The nature of the features sharply distinguishes between the two main techniques: text categorization and information extraction (IE). Both of these techniques are also popularly referred to as "tagging" (because of the tag-formatted structures they introduce in a processed document), and they enable one to obtain formal, structured representations of documents. Text categorization and IE enable users to move from a "machine readable" representation of the documents to a "machine understandable" form of the documents. This view of the tagging approach is depicted in Figure III.2.

Figure III.2. Bridging the gap between raw content that is merely machine readable (Web pages, news stories, CRM, e-mail, technical documentation, document management systems) and actionable, machine understandable information (search, personalization, analysis, alerting, decision support) by means of tagging.

Text categorization (sometimes called text classification) tasks tag each document with a small number of concepts or keywords. The set of all possible concepts or keywords is usually manually prepared, closed, and comparatively small. The hierarchy relation between the keywords is also prepared manually.

IE is perhaps the most prominent technique currently used in text mining preprocessing operations. Without IE techniques, text mining systems would have much more limited knowledge discovery capabilities.
IE must be distinguished from information retrieval, or what is more informally called "search." Information retrieval returns documents that match a given query but still requires the user to read through these documents to locate the relevant information. IE, on the other hand, aims at pinpointing the relevant information and presenting it in a structured – typically tabular – format. For analysts and other knowledge workers, IE can save valuable time by dramatically speeding up discovery-type work.

III.2 FURTHER READING

POS Tagging
Please refer to Maltese and Mancini (1991), Brill (1992), Kupiec (1992), Schutze (1993), and Brill (1995) for further details about POS tagging.

Shallow Parsing
The following papers discuss how to perform shallow parsing of documents: Tzoukermann, Klavans, and Jacquemin (1997); Lager (1998); Daelemans, Buchholz, and Veenstra (1999); Lewin et al. (1999); Munoz et al. (1999); and Punyakanok and Roth (2000).

Constituency Grammars
Information on constituency grammars can be found in Reape (1989), Keller (1992), and Pollard and Sag (1994).

Dependency Grammars
The following papers provide more information about dependency grammars: Lombardo (1991), Carroll and Charniak (1992), Rambow and Joshi (1994), Lin (1995), and Neuhaus and Broker (1997).

General Information Extraction
A general overview of the information extraction field can be found in Cowie and Lehnert (1996), Grishman (1996), Cardie (1997), and Grishman (1997).

IV Categorization

Probably the most common theme in analyzing complex data is the classification, or categorization, of elements. Described abstractly, the task is to classify a given data instance into a prespecified set of categories. Applied to the domain of document management, the task is known as text categorization (TC): given a set of categories (subjects, topics) and a collection of text documents, the process of finding the correct topic (or topics) for each document.

The study of automated text categorization dates back to the early 1960s (Maron 1961). Then, its main projected use was for indexing scientific literature by means of controlled vocabulary. It was only in the 1990s that the field fully developed with the availability of ever-increasing numbers of text documents in digital form and the necessity to organize them for easier use. Nowadays automated TC is applied in a variety of contexts – from the classical automatic or semiautomatic (interactive) indexing of texts to personalized commercials delivery, spam filtering, Web page categorization under hierarchical catalogues, automatic generation of metadata, detection of text genre, and many others.

As with many other artificial intelligence (AI) tasks, there are two main approaches to text categorization. The first is the knowledge engineering approach, in which the expert's knowledge about the categories is directly encoded into the system either declaratively or in the form of procedural classification rules. The other is the machine learning (ML) approach, in which a general inductive process builds a classifier by learning from a set of preclassified examples. In the document management domain, the knowledge engineering systems usually outperform the ML systems, although the gap in performance steadily shrinks.
The main drawback of the knowledge engineering approach is what might be called the knowledge acquisition bottleneck: the huge amount of highly skilled labor and expert knowledge required to create and maintain the knowledge-encoding rules. Therefore, most of the recent work on categorization is concentrated on the ML approach, which requires only a set of manually classified training instances that are much less costly to produce.

This chapter is organized as follows. We start with a description of several common applications of text categorization. Then the formal framework and the issues of problem representation are described. Next we survey the most commonly used algorithms for solving the TC problem and wrap up with the issues of experimental evaluation and a comparison between the different algorithms.

IV.1 APPLICATIONS OF TEXT CATEGORIZATION

Three common TC applications are text indexing, document sorting and text filtering, and Web page categorization. These are only a small set of the possible applications, but they demonstrate the diversity of the domain and the variety of the TC subcases.

IV.1.1 Indexing of Texts Using Controlled Vocabulary

Text indexing was the topic of most of the early research in the TC field. In Boolean information retrieval (IR) systems, each document in a big collection is assigned one or more key terms describing its content. Then, the IR system is able to retrieve the documents according to the user queries, which are based on the key terms. The key terms all belong to a finite set called a controlled vocabulary, which is often a thematic hierarchical thesaurus such as the NASA aerospace thesaurus or the MESH thesaurus for medicine.

The task of assigning keywords from a controlled vocabulary to text documents is called text indexing. If the keywords are viewed as categories, then text indexing is an instance of the general TC problem and can be addressed by the automatic techniques described in this chapter.

Typically, each document should receive at least one, and not more than k, keywords. Also, the task can be solved either fully automatically or semiautomatically, in which case the user selects a set of keywords from a ranked list supplied by a TC system.

Automatic indexing can be a part of automated extraction of metadata. The metadata describe a document in a variety of aspects, some of which are thematic – related to the contents of the document: bibliographic codes, key terms, and so on. Extraction of this metadata can be viewed as a document indexing problem, which can be tackled by TC techniques.

IV.1.2 Document Sorting and Text Filtering

Another common problem, related to but distinct from document indexing, is sorting a given collection of documents into several “bins.” For instance, in a newspaper, the classified ads may need to be categorized into “Personal,” “Car Sale,” “Real Estate,” and so on. Another example is e-mail coming into an organization, which may need to be sorted into categories such as “Complaints,” “Deals,” “Job applications,” and others.

The document sorting problem has several features that distinguish it from related tasks. The main difference is the requirement that each document belong to exactly one category. Other typical features are the relatively small number of categories and the “online” nature of the task: the documents to be categorized are usually presented to the classifier one by one, not as a single batch.
Text filtering can be seen as document sorting with only two bins: the “relevant” and “irrelevant” documents. Examples of text filtering abound. A sports-related online magazine should filter out all nonsport stories it receives from the news feed. An e-mail client should filter away spam. A personalized ad filtering system should block any ads that are uninteresting to the particular user.

For most TC systems, recall errors (which arise when a category is missing some document that should have been assigned to it) and precision errors (which occur when a category includes documents that should not belong to it) are considered to have about the same cost. For many filtering tasks, however, recall errors (e.g., an important letter is considered spam and hence is missing from the “good documents” category) are much more costly than precision errors (some of the spam still passes through, and thus the “good documents” category contains some extra letters).

For personalized filtering systems it is common for the user to provide feedback to the system by marking received documents as relevant or irrelevant. Because it is usually computationally infeasible to fully retrain the system after each document, adaptive learning techniques are required (see Bibliography).

IV.1.3 Hierarchical Web Page Categorization

A common use of TC is the automatic classification of Web pages under the hierarchical catalogues posted by popular Internet portals such as Yahoo. Such catalogues are very useful for direct browsing and for restricting a query-based search to pages belonging to a particular topic.

The other applications described in this section usually constrain the number of categories to which a document may belong. Hierarchical Web page categorization, however, constrains the number of documents belonging to a particular category to prevent the categories from becoming excessively large. Whenever the number of documents in a category exceeds k, it should be split into two or more subcategories. Thus, the categorization system must support adding new categories and deleting obsolete ones.

Another feature of the problem is the hypertextual nature of the documents. Web documents contain links, which may be important sources of information for the classifier because linked documents often share semantics.

The hierarchical structure of the set of categories is also uncommon. It can be dealt with by using a separate classifier at every branching point of the hierarchy.

IV.2 DEFINITION OF THE PROBLEM

The general text categorization task can be formally defined as the task of approximating an unknown category assignment function $F : D \times C \to \{0, 1\}$, where $D$ is the set of all possible documents and $C$ is the set of predefined categories. The value of $F(d, c)$ is 1 if the document $d$ belongs to the category $c$ and 0 otherwise. The approximating function $M : D \times C \to \{0, 1\}$ is called a classifier, and the task is to build a classifier that produces results as “close” as possible to the true category assignment function $F$.

IV.2.1 Single-Label versus Multilabel Categorization

Depending on the properties of $F$, we can distinguish between single-label and multilabel categorization. In multilabel categorization the categories overlap, and a document may belong to any number of categories. In single-label categorization, each document belongs to exactly one category.
Binary categorization is a special case of single-label categorization in which the number of categories is two. The binary case is the most important because it is the simplest, most common, and most often used for the demonstration of categorization techniques. Also, the general single-label case is frequently a simple generalization of the binary case. The multilabel case can be solved by $|C|$ binary classifiers ($|C|$ being the number of categories), one for each category, provided the decisions to assign a document to different categories are independent of each other.

IV.2.2 Document-Pivoted versus Category-Pivoted Categorization

Usually, the classifiers are used in the following way: given a document, the classifier finds all categories to which the document belongs. This is called document-pivoted categorization. Alternatively, we might need to find all documents that should be filed under a given category. This is called category-pivoted categorization. The difference is significant only when not all documents or not all categories are immediately available. For instance, in “online” categorization, the documents come in one by one, and thus only document-pivoted categorization is possible. On the other hand, if the set of categories is not fixed, and if the documents need to be reclassified with respect to newly appearing categories, then category-pivoted categorization is appropriate. However, most of the techniques described in this chapter allow both.

IV.2.3 Hard versus Soft Categorization

A fully automated categorization system makes a binary decision on each document-category pair. Such a system is said to be doing hard categorization. The level of performance currently achieved by fully automatic systems, however, may be insufficient for some applications. Then a semiautomated approach is appropriate, in which the decision to assign a document to a category is made by a human, for whom the TC system provides a list of categories arranged by the system’s estimated appropriateness of the category for the document. In this case, the system is said to be doing soft or ranking categorization.

Many classifiers described in this chapter actually have the whole segment $[0, 1]$ as their range – that is, they produce a real value between zero and one for each document-category pair. This value is called a categorization status value (CSV). Such “continuous” classifiers naturally perform ranking categorization, but if a binary decision is needed, it can be produced by checking the CSV against a specific threshold.

Various policies exist for setting the threshold. For some types of classifiers it is possible to calculate the thresholds analytically, using decision-theoretic measures such as utility. There are also general classifier-independent methods. Fixed thresholding assigns exactly k top-ranking categories to each document. Proportional thresholding sets the threshold in such a way that the same fraction of the test set belongs to a category as belongs to it in the training set. Finally, the most common method is to set the value of the threshold in such a way as to maximize the performance of the classifier on a validation set. The validation set is some portion of the training set that is not used for creating the model. The sole purpose of the validation set is to optimize some of the parameters of the classifier (such as the threshold).
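The two classifier-independent policies translate directly into code. The following is a minimal sketch of fixed and proportional thresholding over a matrix of CSVs; the function names and data are illustrative, not from any particular system:

import numpy as np

# csv[i, j] is the categorization status value of document i for category j.

def fixed_thresholding(csv, k):
    """Assign exactly the k top-ranking categories to each document."""
    decisions = np.zeros_like(csv, dtype=bool)
    top_k = np.argsort(-csv, axis=1)[:, :k]
    for i, cats in enumerate(top_k):
        decisions[i, cats] = True
    return decisions

def proportional_thresholding(csv, train_fraction):
    """Per category, assign as large a fraction of the test documents
    as belongs to the category in the training set."""
    n_docs, n_cats = csv.shape
    decisions = np.zeros_like(csv, dtype=bool)
    for j in range(n_cats):
        quota = int(round(train_fraction[j] * n_docs))
        chosen = np.argsort(-csv[:, j])[:quota]
        decisions[chosen, j] = True
    return decisions

# Example: 4 documents, 3 categories.
csv = np.array([[0.9, 0.2, 0.4],
                [0.1, 0.8, 0.3],
                [0.6, 0.5, 0.7],
                [0.2, 0.1, 0.9]])
print(fixed_thresholding(csv, k=1))
print(proportional_thresholding(csv, train_fraction=[0.25, 0.25, 0.5]))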
Experiments suggest that the validation-set method is usually superior to the others in performance (Lewis 1992a, 1992b; Yang 1999).

IV.3 DOCUMENT REPRESENTATION

The common classifiers and learning algorithms cannot directly process text documents in their original form. Therefore, during a preprocessing step, the documents are converted into a more manageable representation. Typically, the documents are represented by feature vectors. A feature is simply an entity without internal structure – a dimension in the feature space. A document is represented as a vector in this space – a sequence of features and their weights.

The most common bag-of-words model simply uses all words in a document as the features, and thus the dimension of the feature space is equal to the number of different words in all of the documents. The methods of giving weights to the features may vary. The simplest is binary weighting, in which the feature weight is one if the corresponding word is present in the document and zero otherwise. More complex weighting schemes are possible that take into account the frequencies of the word in the document, in the category, and in the whole collection. The most common TF-IDF scheme gives the word $w$ in the document $d$ the weight

$$\text{TF-IDF}(w, d) = \text{TermFreq}(w, d) \cdot \log \frac{N}{\text{DocFreq}(w)},$$

where $\text{TermFreq}(w, d)$ is the frequency of the word in the document, $N$ is the number of all documents, and $\text{DocFreq}(w)$ is the number of documents containing the word $w$.

IV.3.1 Feature Selection

The number of different words is large even in relatively small documents such as short news articles or paper abstracts. The number of different words in big document collections can be huge. The dimension of the bag-of-words feature space for a big collection can reach hundreds of thousands; moreover, the document representation vectors, although sparse, may still have hundreds or thousands of nonzero components.

Most of those words are irrelevant to the categorization task and can be dropped with no harm to classifier performance; dropping them may even improve performance owing to noise reduction. The preprocessing step that removes the irrelevant words is called feature selection. Most TC systems at least remove the stop words – the function words and, in general, the common words of the language that usually do not contribute to the semantics of the documents and have no real added value. Many systems, however, perform much more aggressive filtering, removing 90 to 99 percent of all features.

In order to perform the filtering, a measure of the relevance of each feature needs to be defined. Probably the simplest such measure is the document frequency $\text{DocFreq}(w)$. Experimental evidence suggests that using only the top 10 percent of the most frequent words does not reduce the performance of classifiers. This seems to contradict the well-known “law” of IR, according to which the terms with low-to-medium document frequency are the most informative. There is no contradiction, however, because the large majority of all words have a very low document frequency, and the top 10 percent do contain all the low-to-medium frequency words.

More sophisticated measures of feature relevance exist that take into account the relations between features and the categories.
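Before turning to those measures, note that the TF-IDF weighting defined above takes only a few lines. A minimal sketch, assuming the documents are already tokenized into lists of words (the function name and data are illustrative):

import math
from collections import Counter

def tf_idf_vectors(docs):
    n = len(docs)
    # DocFreq(w): the number of documents containing the word w.
    doc_freq = Counter(w for doc in docs for w in set(doc))
    vectors = []
    for doc in docs:
        term_freq = Counter(doc)  # TermFreq(w, d)
        vectors.append({w: tf * math.log(n / doc_freq[w])
                        for w, tf in term_freq.items()})
    return vectors

docs = [["wheat", "farm", "wheat", "export"],
        ["car", "sale", "export"],
        ["wheat", "tonnes", "winter"]]
for vec in tf_idf_vectors(docs):
    print(vec)

Note that a word occurring in every document gets weight zero, which already performs a mild form of the feature selection discussed above.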
One such measure is the information gain

$$IG(w) = \sum_{c \in C \cup \bar{C}} \; \sum_{f \in \{w, \bar{w}\}} P(f, c) \cdot \log \frac{P(c \mid f)}{P(c)},$$

which measures the number of bits of information obtained for the prediction of categories by knowing the presence or absence in a document of the feature $f$. The probabilities are computed as ratios of frequencies in the training data. Another good measure is the chi-square

$$\chi^2_{\max}(f) = \max_{c \in C} \frac{|Tr| \cdot \left( P(f, c) \cdot P(\bar{f}, \bar{c}) - P(f, \bar{c}) \cdot P(\bar{f}, c) \right)^2}{P(f) \cdot P(\bar{f}) \cdot P(c) \cdot P(\bar{c})},$$

where $|Tr|$ is the size of the training collection, which measures the maximal strength of dependence between the feature and the categories. Experiments show that both measures (and several other measures) can reduce the dimensionality by a factor of 100 without loss of categorization quality – or even with a small improvement (Yang and Pedersen 1997).

IV.3.2 Dimensionality Reduction by Feature Extraction

Another way of reducing the number of dimensions is to create a new, much smaller set of synthetic features from the original feature set. In effect, this amounts to creating a transformation from the original feature space to another space of much lower dimension. The rationale for using synthetic features rather than naturally occurring words (as the simpler feature filtering method does) is that, owing to polysemy, homonymy, and synonymy, the words may not be the optimal features. By transforming the set of features it may be possible to create document representations that do not suffer from the problems inherent in those properties of natural language.

Term clustering addresses the problem of synonymy by grouping together words with a high degree of semantic relatedness. These word groups are then used as features instead of individual words. Experiments conducted by several groups of researchers showed a potential in this technique only when background information about the categories was used for clustering (Baker and McCallum 1998; Slonim and Tishby 2001). With unsupervised clustering, the results are inferior (Lewis 1992a, 1992b; Li and Jain 1998).

A more systematic approach is latent semantic indexing (LSI). The details of this method are described in Chapter V. For the TC problem, the performance of LSI also improves if the category information is used. Several LSI representations, one for each category, outperform a single global LSI representation. The experiments also show that LSI usually performs better than the chi-square filtering scheme.

IV.4 KNOWLEDGE ENGINEERING APPROACH TO TC

The knowledge engineering approach to TC is focused on the manual development of classification rules. A domain expert defines a set of sufficient conditions for a document to be labeled with a given category. The development of the classification rules can be quite labor intensive and tedious.

We mention only a single example of the knowledge engineering approach to TC – the well-known CONSTRUE system (Hayes, Knecht, and Cellio 1988; Hayes et al. 1990; Hayes and Weinstein 1990; Hayes 1992) built by the Carnegie Group for Reuters. A typical rule in the CONSTRUE system is of the form

if ⟨DNF (disjunction of conjunctive clauses) formula⟩ then category else ¬category

Such a rule may look like the following:

if ((wheat & farm) or
    (wheat & commodity) or
    (bushels & export) or
    (wheat & tonnes) or
    (wheat & winter & ¬soft))
then Wheat else ¬Wheat

The system was reported to produce a 90-percent breakeven between precision and recall on a small subset of the Reuters collection (723 documents).
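Rules of this kind translate directly into code. A minimal sketch of the wheat rule above, assuming documents are represented simply as sets of words (a representation choice made here for illustration, not a detail of CONSTRUE):

def wheat_rule(words):
    """Return True iff the document should be labeled Wheat."""
    return (("wheat" in words and "farm" in words)
            or ("wheat" in words and "commodity" in words)
            or ("bushels" in words and "export" in words)
            or ("wheat" in words and "tonnes" in words)
            or ("wheat" in words and "winter" in words
                and "soft" not in words))

print(wheat_rule({"wheat", "winter", "tonnes"}))   # True
print(wheat_rule({"wheat", "winter", "soft"}))     # False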
It is unclear whether the particular choice of test collection influenced the results and whether the system would scale up, but such excellent performance has not yet been attained by machine learning systems. However, the knowledge acquisition bottleneck that plagues such expert systems (it took several man-years to develop and fine-tune the CONSTRUE system for Reuters) makes the ML approach attractive despite its possibly somewhat lower-quality results.

IV.5 MACHINE LEARNING APPROACH TO TC

In the ML approach, the classifier is built automatically by learning the properties of categories from a set of preclassified training documents. In ML terminology, the learning process is an instance of supervised learning because it is guided by applying the known true category assignment function on the training set. The unsupervised version of the classification task, called clustering, is described in Chapter V. There are many approaches to classifier learning; some of them are variants of more general ML algorithms, and others have been created specifically for categorization.

Four main issues need to be considered when using machine learning techniques to develop an application based on text categorization. First, we need to decide on the categories that will be used to classify the instances. Second, we need to provide a training set for each of the categories. As a rule of thumb, about 30 examples are needed for each category. Third, we need to decide on the features that represent each of the instances. Usually, it is better to generate as many features as possible because most of the algorithms will be able to focus on just the relevant features. Finally, we need to decide on the algorithm to be used for the categorization.

IV.5.1 Probabilistic Classifiers

Probabilistic classifiers view the categorization status value $\mathrm{CSV}(d, c)$ as the probability $P(c \mid d)$ that the document $d$ belongs to the category $c$ and compute this probability by an application of Bayes’ theorem:

$$P(c \mid d) = \frac{P(d \mid c) \, P(c)}{P(d)}.$$

The marginal probability $P(d)$ need never be computed because it is constant for all categories. To calculate $P(d \mid c)$, however, we need to make some assumptions about the structure of the document $d$. With the document representation as a feature vector $d = (w_1, w_2, \ldots)$, the most common assumption is that all coordinates are independent, and thus

$$P(d \mid c) = \prod_i P(w_i \mid c).$$

The classifiers resulting from this assumption are called Naïve Bayes (NB) classifiers. They are called “naïve” because the assumption is never verified and often is quite obviously false. However, attempts to relax the naïve assumption and to use probabilistic models with dependence have so far not produced any significant improvement in performance. Some theoretical justification for this unexpected robustness of Naïve Bayes classifiers is given in Domingos and Pazzani (1997).

IV.5.2 Bayesian Logistic Regression

It is possible to model the conditional probability $P(c \mid d)$ directly. Bayesian logistic regression (BLR) is an old statistical approach that was only recently applied to the TC problem and is quickly gaining popularity owing to its apparently very high performance. Assuming the categorization is binary, the logistic regression model has the form

$$P(c \mid d) = \varphi(\beta \cdot d) = \varphi\left( \sum_i \beta_i d_i \right),$$

where $c = \pm 1$ is the category membership value ($\pm 1$ is used instead of $\{0, 1\}$ for simpler notation), $d = (d_1, d_2, \ldots)$
is the document representation in the feature space, $\beta = (\beta_1, \beta_2, \ldots)$ is the vector of model parameters, and $\varphi$ is the logistic link function

$$\varphi(x) = \frac{\exp(x)}{1 + \exp(x)} = \frac{1}{1 + \exp(-x)}.$$

Care must be taken so that the logistic regression model does not overfit the training data. The Bayesian approach is to use a prior distribution for the parameter vector $\beta$ that assigns a high probability to each $\beta_i$’s being at or near zero. Different priors are possible, the most commonly used being the Gaussian and Laplace priors.

The simplest is the Gaussian prior with zero mean and variance $\tau$:

$$p(\beta_i \mid \tau) = N(0, \tau) = \frac{1}{\sqrt{2\pi\tau}} \exp\left( \frac{-\beta_i^2}{2\tau} \right).$$

If a priori independence of the components of $\beta$ and equality of the variances $\tau$ for all components are assumed, the overall prior for $\beta$ is the product of the priors for the $\beta_i$. With this prior, the maximum a posteriori (MAP) estimate of $\beta$ is equivalent to ridge regression for the logistic model.

The disadvantage of the Gaussian prior in the TC problem is that, although it favors the parameter values’ being close to zero, the MAP estimates of the parameters will rarely be exactly zero; thus, the model will not be sparse. The alternative Laplace prior does achieve sparseness:

$$p(\beta_i \mid \lambda) = \frac{\lambda}{2} \exp(-\lambda |\beta_i|).$$

Using this kind of prior represents a belief that a small portion of the input variables has a substantial effect on the outcome, whereas most of the other variables are unimportant. This belief is certainly justifiable for the TC task. The particular value of the hyperparameter $\lambda$ (and $\tau$ for the Gaussian prior) can be chosen a priori or optimized using a validation set.

In an “ideal” setting, the posterior distribution of $\beta$ would be used for the actual prediction. Owing to computational cost constraints, however, it is common to use a point estimate of $\beta$, of which the posterior mode (any value of $\beta$ at which the posterior distribution of $\beta$ takes on its maximal value) is the most common. The log-posterior distribution of $\beta$ is

$$l(\beta) = \ln p(\beta \mid D) = -\left( \sum_{(d,c) \in D} \ln\left( \exp(-c \, \beta \cdot d) + 1 \right) \right) + \ln p(\beta),$$

where $D = \{(d_1, c_1), (d_2, c_2), \ldots\}$ is the set of training documents $d_i$ and their true category membership values $c_i = \pm 1$, and $p(\beta)$ is the chosen prior:

$$\ln p(\beta) = -\sum_i \left( \ln \sqrt{\tau} + \frac{\ln 2\pi}{2} + \frac{\beta_i^2}{2\tau} \right)$$

for the Gaussian prior, and

$$\ln p(\beta) = -\sum_i \left( \ln 2 - \ln \lambda + \lambda |\beta_i| \right)$$

for the Laplace prior. The MAP estimate of $\beta$ is then simply $\arg\max_\beta l(\beta)$, which can be computed by any convex optimization algorithm.

IV.5.3 Decision Tree Classifiers

Many categorization methods share a certain drawback: the classifiers cannot be easily understood by humans. The symbolic classifiers, of which the decision tree classifiers are the most prominent example, do not suffer from this problem.

A decision tree (DT) classifier is a tree in which the internal nodes are labeled by features, the edges leaving a node are labeled by tests on the feature’s weight, and the leaves are labeled by categories. A DT categorizes a document by starting at the root of the tree and moving successively downward via the branches whose conditions are satisfied by the document until a leaf node is reached. The document is then assigned to the category that labels the leaf node. Most DT classifiers use a binary document representation, and thus the trees are binary. For example, the tree that corresponds to the CONSTRUE rule mentioned in Section IV.4 may look like Figure IV.1.

[Figure IV.1. A decision tree classifier.]
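As an illustration of how such a tree can be grown from training data (the general induction procedure is described next), here is a minimal sketch of greedy induction on binary word features with splits chosen by information gain; the data are illustrative, and pruning is omitted:

import math
from collections import Counter

def entropy(labels):
    n = len(labels)
    return -sum((k / n) * math.log2(k / n) for k in Counter(labels).values())

def build_tree(docs, labels, features):
    if len(set(labels)) == 1 or not features:
        return Counter(labels).most_common(1)[0][0]      # leaf: majority label
    def gain(f):
        with_f = [l for d, l in zip(docs, labels) if f in d]
        without = [l for d, l in zip(docs, labels) if f not in d]
        rem = sum(len(part) / len(labels) * entropy(part)
                  for part in (with_f, without) if part)
        return entropy(labels) - rem
    best = max(features, key=gain)                       # split on best feature
    has = [(d, l) for d, l in zip(docs, labels) if best in d]
    hasnt = [(d, l) for d, l in zip(docs, labels) if best not in d]
    if not has or not hasnt:
        return Counter(labels).most_common(1)[0][0]
    rest = features - {best}
    return (best,
            build_tree(*zip(*has), rest),                # branch: feature present
            build_tree(*zip(*hasnt), rest))              # branch: feature absent

docs = [{"wheat", "farm"}, {"wheat", "soft"}, {"car", "sale"}]
labels = ["Wheat", "NotWheat", "NotWheat"]
print(build_tree(docs, labels, {"wheat", "farm", "soft", "car", "sale"}))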
Most DT-based systems use some form of general procedure for DT induction such as ID3, C4.5, or CART. Typically, the tree is built recursively by picking a feature $f$ at each step and dividing the training collection into two subcollections, one containing $f$ and the other not containing $f$, until only documents of a single category remain – at which point a leaf node is generated. The choice of feature at each step is made by some information-theoretic measure such as information gain or entropy. However, the trees generated in this way are prone to overfitting the training collection, and so most methods also include pruning – that is, removing branches that are too specific.

The performance of DT classifiers is mixed, but it is inferior to the top-ranking classifiers. Thus a DT is rarely used alone in tasks for which human understanding of the classifier is not essential. DT classifiers, however, are often used as a baseline for comparison with other classifiers and as members of classifier committees.

IV.5.4 Decision Rule Classifiers

Decision rule (DR) classifiers are, like decision trees, symbolic classifiers. The rules look very much like the disjunctive normal form (DNF) rules of the CONSTRUE system but are built from the training collection using inductive rule learning. Typically, rule learning methods attempt to select the best rule from the set of all possible covering rules (i.e., rules that correctly classify all training examples) according to some optimality criterion.

DNF rules are often built in a bottom-up fashion. The initial, most specific classifier is built from the training set by viewing each training document as a clause

$$d_1 \wedge d_2 \wedge \cdots \wedge d_n \to c,$$

where the $d_i$ are the features of the document and $c$ its category. The learner then applies a series of generalizations (e.g., by removing terms from the clauses and by merging rules), thus maximizing the compactness of the rules while keeping the covering property. At the end of the process, a pruning step similar to DT pruning is applied to trade covering for greater generality.

Rule learners vary widely in their specific methods, heuristics, and optimality criteria. One of the prominent examples of this family of algorithms is RIPPER (repeated incremental pruning to produce error reduction) (Cohen 1995a; Cohen 1995b; Cohen and Singer 1996). RIPPER builds a rule set by first adding new rules until all positive category instances are covered and then adding conditions to the rules until no negative instance is covered. One of the attractive features of RIPPER is its ability to bias the performance toward higher precision or higher recall as determined by the setting of the loss ratio parameter, which measures the relative cost of “false negative” and “false positive” errors.

IV.5.5 Regression Methods

Regression is a technique for approximating a real-valued function using knowledge of its values on a set of points. It can be applied to TC, which is the problem of approximating the category assignment function. For this method to work, the assignment function must be considered a member of a suitable family of continuous real-valued functions. Then the regression techniques can be applied to generate a (real-valued) classifier. One regression method is the linear least-squares fit (LLSF), which was first applied to TC in Yang and Chute (1994).
In this method, the category assignment function is viewed as a $|C| \times |F|$ matrix, which describes some linear transformation from the feature space to the space of all possible category assignments. To build a classifier, we create a matrix that best accounts for the training data. The LLSF model computes the matrix by minimizing the error on the training collection according to the formula

$$M = \arg\min_M \|MD - O\|_F,$$

where $D$ is the $|F| \times |\mathit{TrainingCollection}|$ matrix of the training document representations, $O$ is the $|C| \times |\mathit{TrainingCollection}|$ matrix of the true category assignments, and $\| \cdot \|_F$ is the Frobenius norm

$$\|A\|_F = \sqrt{\sum_{i,j} A_{ij}^2}.$$

The matrix $M$ can be computed by performing singular value decomposition on the training data. The matrix element $m_{ij}$ represents the degree of association between the $i$th category and the $j$th feature.

IV.5.6 The Rocchio Methods

The Rocchio classifier categorizes a document by computing its distance to the prototypical examples of the categories. A prototypical example for the category $c$ is a vector $(w_1, w_2, \ldots)$ in the feature space computed by

$$w_i = \frac{\alpha}{|POS(c)|} \sum_{d \in POS(c)} w_{di} \; - \; \frac{\beta}{|NEG(c)|} \sum_{d \in NEG(c)} w_{di},$$

where $POS(c)$ and $NEG(c)$ are the sets of all training documents that belong and do not belong to the category $c$, respectively, and $w_{di}$ is the weight of the $i$th feature in the document $d$. Usually, the positive examples are much more important than the negative ones, and so $\alpha \gg \beta$. If $\beta = 0$, then the prototypical example for a category is simply the centroid of all documents belonging to the category.

The Rocchio method is very easy to implement, and it is computationally cheap. Its performance, however, is usually mediocre – especially with categories that are unions of disjoint clusters and, in general, with categories that are not linearly separable.

IV.5.7 Neural Networks

Neural networks (NNs) can be built to perform text categorization. Usually, the input nodes of the network receive the feature values, the output nodes produce the categorization status values, and the link weights represent dependence relations. To classify a document, its feature weights are loaded into the input nodes; the activation of the nodes is propagated forward through the network, and the final values on the output nodes determine the categorization decisions.

The neural networks are trained by backpropagation, whereby the training documents are loaded into the input nodes. If a misclassification occurs, the error is propagated back through the network, modifying the link weights in order to minimize the error.

The simplest kind of neural network is a perceptron. It has only two layers: the input and the output nodes. Such a network is equivalent to a linear classifier. More complex networks contain one or more hidden layers between the input and output layers. However, experiments have shown very small – or no – improvement of nonlinear networks over their linear counterparts in the text categorization task (Schutze, Hull, and Pederson 1995; Wiener 1995).

IV.5.8 Example-Based Classifiers

Example-based classifiers do not build explicit declarative representations of categories but instead rely on directly computing the similarity between the document to be classified and the training documents. These methods have thus been called lazy learners because they defer the decision on how to generalize beyond the training data until each new query instance is encountered.
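Before turning to the most prominent example-based classifier, the Rocchio prototype computation from Section IV.5.6 can be sketched in a few lines; the α and β values, the data, and the cosine-based assignment rule are illustrative assumptions rather than prescribed choices:

import numpy as np

def rocchio_prototype(pos, neg, alpha=16.0, beta=4.0):
    """pos, neg: arrays of feature vectors of documents in / not in c."""
    return (alpha / len(pos)) * pos.sum(axis=0) \
         - (beta / len(neg)) * neg.sum(axis=0)

def classify(doc, prototypes):
    """Assign the category whose prototype is closest in cosine terms."""
    def cos(a, b):
        return a @ b / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-12)
    return max(prototypes, key=lambda c: cos(doc, prototypes[c]))

pos = np.array([[1.0, 2.0, 0.0], [2.0, 1.0, 0.0]])   # category "Wheat"
neg = np.array([[0.0, 0.0, 3.0]])
prototypes = {"Wheat": rocchio_prototype(pos, neg),
              "Other": rocchio_prototype(neg, pos)}
print(classify(np.array([1.0, 1.0, 0.0]), prototypes))  # 'Wheat'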
“Training” for such classifiers consists of simply storing the representations of the training documents together with their category labels.

The most prominent example of an example-based classifier is kNN (k-nearest neighbors). To decide whether a document $d$ belongs to the category $c$, kNN checks whether the $k$ training documents most similar to $d$ belong to $c$. If the answer is positive for a sufficiently large proportion of them, a positive decision is made; otherwise, the decision is negative. The distance-weighted version of kNN is a variation that weighs the contribution of each neighbor by its similarity to the test document.

In order to use the algorithm, one must choose the value of $k$. It can be optimized using a validation set, but it is probable that a good value can be picked a priori. Larkey and Croft (1996) use $k = 20$, whereas Yang (2001) has found $30 \le k \le 45$ to yield the best effectiveness. Various experiments have shown that increasing the value of $k$ does not significantly degrade the performance.

kNN is one of the best-performing text classifiers to this day. It is robust in the sense of not requiring the categories to be linearly separable. Its only drawback is the relatively high computational cost of classification: for each test document, its similarity to all of the training documents must be computed.

IV.5.9 Support Vector Machines

The support vector machine (SVM) algorithm is very fast and effective for text classification problems.

In geometric terms, a binary SVM classifier can be seen as a hyperplane in the feature space separating the points that represent the positive instances of the category from the points that represent the negative instances. The classifying hyperplane is chosen during training as the unique hyperplane that separates the known positive instances from the known negative instances with the maximal margin. The margin is the distance from the hyperplane to the nearest point from the positive and negative sets. The diagram shown in Figure IV.2 is an example of a maximal margin hyperplane in two dimensions.

[Figure IV.2. Diagram of a 2-D linear SVM.]

It is interesting to note that SVM hyperplanes are fully determined by a relatively small subset of the training instances, which are called the support vectors. The rest of the training data have no influence on the trained classifier. In this respect, the SVM algorithm appears to be unique among the different categorization algorithms.

The SVM classifier has an important advantage in its theoretically justified approach to the overfitting problem, which allows it to perform well irrespective of the dimensionality of the feature space. Also, it needs no parameter adjustment, because there is a theoretically motivated “default” choice of parameters that has also been shown experimentally to provide the best performance.

IV.5.10 Classifier Committees: Bagging and Boosting

The idea of using committees of classifiers stems from the intuition that a team of experts, by combining their knowledge, may produce better results than a single expert alone. In the bagging method of building committees, the individual classifiers are trained in parallel on the same training collection.
In order for the committee to work, the classifiers must differ significantly from each other – either in their document representation or in their learning methods. In text categorization, the latter method is usually chosen. As this chapter suggests, there is certainly no shortage of widely different learning methods.

Assume there are $k$ different classifiers. To build a single committee classifier, one must choose a method of combining their results. The simplest method is the majority vote, in which a category is assigned to a document iff at least $(k+1)/2$ classifiers decide this way ($k$ must be an odd number, obviously). Another possibility, suited for continuous output, is the weighted linear combination, whereby the final CSV is given by a weighted sum of the CSVs of the $k$ classifiers. The weights can be estimated on a validation dataset. Other methods of combining classifiers are also possible.

Boosting is another method of improving the quality of categorization by using several classifiers. Unlike the bagging method, in boosting the classifiers are trained sequentially. Before training the $i$th classifier, the training set is reweighted with greater weight given to the documents that were misclassified by the previous classifiers. The AdaBoost algorithm is the best-known example of this approach. It is defined as follows.

Let $X$ be the feature space, and let $D = \{(d_1, c_1), (d_2, c_2), \ldots\}$ be the training data, where $d_i \in X$ are the training document representations and $c_i \in \{+1, -1\}$ the (binary) category assignments. A weak learner is some algorithm that is able to produce a weak hypothesis (classifier) $h : X \to \{\pm 1\}$ given the training data $D$ together with a weight distribution $W$ upon it. The “goodness” of a hypothesis is measured by its error

$$\varepsilon(h, W) = \sum_{i \,:\, h(d_i) \neq c_i} W(i),$$

which is the sum of the weights of the misclassified documents.

The AdaBoost algorithm

■ Initializes the weight distribution $W^1(i) = 1/|D|$ for all $i$, and
■ Repeats for $t = 1, \ldots, k$:
  – Train a weak classifier $h_t$ using the current weights $W^t$.
  – Let $\alpha_t = \frac{1}{2} \ln \frac{1 - \varepsilon(h_t, W^t)}{\varepsilon(h_t, W^t)}$.
  – Update the weights:
$$W^{t+1}(i) = Z_t \cdot W^t(i) \cdot \begin{cases} \exp(-\alpha_t), & \text{if } h_t(d_i) = c_i, \\ \exp(\alpha_t), & \text{otherwise} \end{cases}$$
    ($Z_t$ is the normalization factor chosen so that $\sum_i W^{t+1}(i) = 1$).
■ The final classifier is

$$H(d) = \mathrm{sign}\left( \sum_{t=1}^{k} \alpha_t h_t(d) \right).$$

It can be proved that, if the weak learner is able to generate classifiers with error $\varepsilon < \frac{1}{2} - \lambda$ for any fixed $\lambda > 0$ (that is, if the weak classifiers are any better than random), then the training error of the final classifier drops exponentially fast with the number $k$ of algorithm steps. It can also be shown that AdaBoost is closely related to SVM, for it also maximizes the margin between training instances. Because of this, AdaBoost has a similar resistance to overfitting.

IV.6 USING UNLABELED DATA TO IMPROVE CLASSIFICATION

All of the ML classifiers require fairly large training collections of preclassified documents. The task of manually labeling a large number of documents, although much less costly than manually creating a classification knowledge base, is still usually quite a chore. On the other hand, unlabeled documents usually exist in abundance, and any amount of them can be acquired with little cost. Therefore, the ability to improve classifier performance by augmenting a relatively small number of labeled documents with a large number of unlabeled ones is very useful for applications.
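The AdaBoost loop above is short enough to sketch in full. The following minimal sketch uses one-word decision stumps as the weak learner (the stump learner and the data are assumptions made here; the text leaves the weak learner unspecified):

import math

def stump_learner(docs, labels, weights, vocab):
    """Pick the (word, sign) stump with the smallest weighted error."""
    best, best_err = None, 2.0
    for w in vocab:
        for sign in (1, -1):
            err = sum(wt for d, l, wt in zip(docs, labels, weights)
                      if (sign if w in d else -sign) != l)
            if err < best_err:
                best, best_err = (w, sign), err
    word, sign = best
    return (lambda d: sign if word in d else -sign), best_err

def adaboost(docs, labels, vocab, k=5):
    n = len(docs)
    weights = [1.0 / n] * n                      # W^1(i) = 1/|D|
    ensemble = []
    for _ in range(k):
        h, err = stump_learner(docs, labels, weights, vocab)
        err = max(err, 1e-10)                    # guard against log(0)
        alpha = 0.5 * math.log((1 - err) / err)
        # Reweight: exp(-alpha) for correct documents, exp(alpha) otherwise.
        weights = [wt * math.exp(-alpha * labels[i] * h(docs[i]))
                   for i, wt in enumerate(weights)]
        z = sum(weights)
        weights = [wt / z for wt in weights]     # normalize so weights sum to 1
        ensemble.append((alpha, h))
    return lambda d: 1 if sum(a * h(d) for a, h in ensemble) > 0 else -1

docs = [{"wheat", "farm"}, {"wheat", "soft"}, {"car"}, {"wheat"}]
labels = [1, -1, -1, 1]
H = adaboost(docs, labels, {"wheat", "farm", "soft", "car"}, k=3)
print([H(d) for d in docs])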
The two common ways of incorporating knowledge from unlabeled documents are expectation maximization (EM) and cotraining. EM works with probabilistic generative classifiers such as Naïve Bayes. The idea is to find the most probable model given both labeled and unlabeled documents. The EM algorithm performs the optimization in a simple and appealing way:

■ First, the model is trained over the labeled documents.
■ Then the following steps are iterated until convergence at a local maximum occurs:
  – E-step: The unlabeled documents are classified by the current model.
  – M-step: The model is trained over the combined corpus.

In the M-step, the category assignments of the unlabeled documents are assumed to be fractional according to the probabilities produced by the E-step.

Cotraining works with documents for which two views are available, providing two different document representations, both of which are sufficient for classification. For example, a Web page may have its content as one view and the anchor text appearing in the hyperlinks to the page as another. In the domain of MedLine papers, the abstract may be one view and the whole text another. Cotraining is a bootstrapping strategy in which the unlabeled documents classified by means of one of the views are then used for training the classifier using the other view, and vice versa.

Both the EM and cotraining strategies have experimentally shown a significant reduction (up to 60 percent) in the amount of labeled training data required to produce the same classifier performance.

IV.7 EVALUATION OF TEXT CLASSIFIERS

Because the text categorization problem is not sufficiently well-defined, the performance of classifiers can be evaluated only experimentally.

Any TC experiment requires a document collection labeled with a set of categories. This collection is divided into two parts: the training and test document sets. The training set, as the name suggests, is used for training the classifier, and the test set is the one on which the performance measures are calculated. Usually, the test set is the smaller of the two. It is very important not to use the test set in any way during classifier training and fine-tuning. When there is a need to optimize some classifier parameters experimentally, the training set is further divided into two parts: the training set proper and a validation set, which is used for the parameter optimizations.

A commonly used method to smooth out the variations in the corpus is n-fold cross-validation. In this method, the whole document collection is divided into n equal parts, and the training-and-testing process is run n times, each time using a different part of the collection as the test set. The results for the n folds are then averaged.

IV.7.1 Performance Measures

The most common performance measures are the classic IR measures of recall and precision. Recall for a category is defined as the percentage of correctly classified documents among all documents belonging to that category, and precision is the percentage of correctly classified documents among all documents that were assigned to the category by the classifier.

Many classifiers allow trading recall for precision or vice versa by raising or lowering parameter settings or the output threshold. For such classifiers there is a convenient measure, called the breakeven point, which is the value of recall and precision at the point on the recall-versus-precision curve where they are equal.
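A minimal sketch of the contingency arithmetic behind these measures follows; the F1 combination computed at the end is defined in the next paragraph, and the data are illustrative:

def recall_precision(true_cats, pred_cats):
    tp = len(true_cats & pred_cats)          # correctly assigned documents
    recall = tp / len(true_cats) if true_cats else 0.0
    precision = tp / len(pred_cats) if pred_cats else 0.0
    return recall, precision

# Documents 0..4; which truly belong to / were assigned to "Wheat".
true_wheat = {0, 1, 2}
pred_wheat = {1, 2, 3, 4}
r, p = recall_precision(true_wheat, pred_wheat)
f1 = 2 / (1 / r + 1 / p)
print(f"recall={r:.2f} precision={p:.2f} F1={f1:.2f}")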
An alternative combined measure is F1, equal to $2/(1/\text{recall} + 1/\text{precision})$, which combines the two measures in an ad hoc way.

IV.7.2 Benchmark Collections

The best-known publicly available collection is the Reuters set of news stories, classified under economics-related categories. This collection accounts for most of the experimental work in TC so far. Unfortunately, this does not mean that the results produced by different researchers are directly comparable because of subtle differences in the experimental conditions.

In order for the results of two experiments to be directly comparable, the following conditions must be met:

(1) The experiments must be performed on exactly the same collection (meaning the same documents and same categories) using the same split between training and test sets.
(2) The same performance measure must be chosen.
(3) If a particular part of a system is compared, all other parts must be exactly the same. For instance, when comparing learning algorithms, the document representations must be the same, and when comparing dimension reduction methods, the learning algorithms must be fixed together with their parameters.

These conditions are very difficult to meet – especially the last one. Thus, in practice, the only reliable comparisons are those done by the same researcher.

Other frequently used benchmark collections are the OHSUMED collection of titles and abstracts of papers from medical journals categorized with MESH thesaurus terms, the 20 Newsgroups collection of messages posted to newsgroups with the newsgroups themselves as categories, and the TREC-AP collection of newswire stories.

IV.7.3 Comparison among Classifiers

Given the lack of a reliable way to compare classifiers across researchers, it is possible to draw only very general conclusions in reference to the question, Which classifier is best?

■ According to most researchers, the top performers are SVM, AdaBoost, kNN, and regression methods. Insufficient statistical evidence has been compiled to determine the best of these methods. Efficiency considerations, implementation complexity, and other application-related issues may assist in selecting from among these classifiers for specific problems.
■ Rocchio and Naïve Bayes have the worst performance among the ML classifiers, but both are often used as baseline classifiers. Also, NB is very useful as a member of classifier committees.
■ There are mixed results regarding the neural network and decision tree classifiers. Some experiments have demonstrated rather poor performance, whereas in other experiments they performed nearly as well as SVM.

IV.8 CITATIONS AND NOTES

Section IV.1
Applications of text categorization are described in Hayes et al. (1988); Ittner, Lewis, and Ahn (1995); Larkey (1998); Lima, Laender, and Ribeiro-Neto (1998); Attardi, Gulli, and Sebastiani (1999); Drucker, Vapnik, and Wu (1999); Moens and Dumortier (2000); Yang, Ault, Pierce, and Lattimer (2000); Gentili et al. (2001); Krier and Zacca (2002); Fall et al. (2003); and Giorgetti and Sebastiani (2003a, 2003b).

Section IV.2
For a general introduction to text categorization, refer to Sebastiani (2002) and Lewis (2000), which provide an excellent tutorial on the subject.

Section IV.3
Approaches that integrate linguistic and background knowledge into the categorization process can be found in Jacobs (1992); Rodriguez et al. (1997); Aizawa (2001); and Benkhalifa, Mouradi, and Bouyakhf (2001a, 2001b).
Sections IV.5.3–IV.5.4
The following papers discuss how to use decision trees and decision lists for text categorization: Apte, Damerau, and Weiss (1994a, 1994b, 1994c); Li and Yamanishi (1999); Chen and Ho (2000); and Li and Yamanishi (2002).

Section IV.5.5
The use of regression for text categorization is discussed in Zhang and Oles (2001), Zhang et al. (2003), and Zhang and Yang (2003).

Section IV.5.8
The kNN algorithm is discussed and described in Yavuz and Guvenir (1998); Han, Karypis, and Kumar (2001); Soucy and Mineau (2001b); and Kwon and Lee (2003).

Section IV.5.9
The SVM algorithm is described and discussed in Vapnik (1995); Joachims (1998); Kwok (1998); Drucker, Vapnik, et al. (1999); Joachims (1999); Klinkenberg and Joachims (2000); Siolas and d’Alche-Buc (2000); Tong and Koller (2000); Joachims (2001); Brank et al. (2002); Joachims (2002); Leopold and Kindermann (2002); Diederich et al. (2003); Sun, Naing, et al. (2003); Xu et al. (2003); and Zhang and Lee (2003).

Section IV.5.10
Approaches that combine several algorithms by using committees of algorithms or by using boosting are described in Larkey and Croft (1996); Liere and Tadepalli (1997); Liere and Tadepalli (1998); Forsyth (1999); Ruiz and Srinivasan (1999a, 1999b); Schapire and Singer (2000); Sebastiani, Sperduti, and Valdambrini (2000); Al-Kofahi et al. (2001); Bao et al. (2001); Lam and Lai (2001); Taira and Haruno (2001); and Nardiello, Sebastiani, and Sperduti (2003).

Additional Algorithms
There are several adaptive (or online) algorithms that build classifiers incrementally without requiring the whole training set to be present at once. A simple perceptron is described in Schutze et al. (1995) and Wiener (1995). The Winnow algorithm, a multiplicative variant of the perceptron, is described in Dagan, Karov, and Roth (1997). Other online algorithms include Widrow-Hoff and Exponentiated Gradient (Lewis et al. 1996) and Sleeping Experts (Cohen and Singer 1999). Relational and rule-based approaches to text categorization are discussed in Cohen (1992); Cohen (1995a, 1995b); and Cohen and Hirsh (1998).

Section IV.7
Comparisons between the categorization algorithms are discussed in Yang (1996) and Yang (1999).

V Clustering

Clustering is an unsupervised process through which objects are classified into groups called clusters. In categorization problems, as described in Chapter IV, we are provided with a collection of preclassified training examples, and the task of the system is to learn the descriptions of classes in order to be able to classify a new unlabeled object. In the case of clustering, the problem is to group the given unlabeled collection into meaningful clusters without any prior information. Any labels associated with objects are obtained solely from the data.

Clustering is useful in a wide range of data analysis fields, including data mining, document retrieval, image segmentation, and pattern classification. In many such problems, little prior information is available about the data, and the decision-maker must make as few assumptions about the data as possible. It is for those cases that the clustering methodology is especially appropriate.

Clustering techniques are described in this chapter in the context of textual data analysis. Section V.1 discusses the various applications of clustering in text analysis domains. Sections V.2 and V.3 address the general clustering problem and present several clustering algorithms.
Finally, Section V.4 demonstrates how the clustering algorithms can be adapted to text analysis.

V.1 CLUSTERING TASKS IN TEXT ANALYSIS

One application of clustering is the analysis and navigation of big text collections such as Web pages. The basic assumption, called the cluster hypothesis, states that relevant documents tend to be more similar to each other than to nonrelevant ones. If this assumption holds for a particular document collection, the clustering of documents based on the similarity of their content may help to improve the search effectiveness.

V.1.1 Improving Search Recall

Standard search engines and IR systems return lists of documents that match a user query. It is often the case that the same concepts are expressed by different terms in different texts. For instance, a “car” may be called “automobile,” and a query for “car” would miss the documents containing the synonym. However, the overall word contents of related texts would still be similar despite the existence of many synonyms. Clustering, which is based on this overall similarity, may help improve the recall of a query-based search in such a way that when a query matches a document, its whole cluster can be returned. This method alone, however, might significantly degrade precision because there are often many ways in which documents are similar, and the particular way to cluster them should depend on the particular query.

V.1.2 Improving Search Precision

As the number of documents in a collection grows, it becomes difficult to browse through the lists of matched documents, given their size. Because the lists are unstructured except for a rather weak relevance ordering, the user must know the exact search terms in order to find a document of interest. Otherwise, the user may be left with tens of thousands of matched documents to scan. Clustering may help by grouping the documents into a much smaller number of groups of related documents, ordering them by relevance, and returning only the documents from the most relevant group or the several most relevant groups.

Experience, however, has shown that the user needs to guide the clustering process so that the clustering will be more relevant to the user’s specific interest. An interactive browsing strategy called scatter/gather is a development of this idea.

V.1.3 Scatter/Gather

The scatter/gather browsing method (Cutting et al. 1992; Hearst and Pedersen 1996) uses clustering as a basic organizing operation. The purpose of the method is to enhance the efficiency of human browsing of a document collection when a specific search query cannot be formulated. The method is similar to the techniques used for browsing a printed book. An index, which is similar to a very specific query, is used for locating specific information. However, when a general overview is needed or a general question is posed, a table of contents, which presents the logical structure of the text, is consulted. It gives a sense of what sorts of questions may be answered by more intensive exploration of the text, and it may lead to the particular sections of interest.

During each iteration of a scatter/gather browsing session, a document collection is scattered into a set of clusters, and short descriptions of the clusters are presented to the user. Based on the descriptions, the user selects one or more of the clusters that appear relevant. The selected clusters are then gathered into a new subcollection with which the process may be repeated.
In a sense, the method dynamically generates a table of contents for the collection and adapts and modifies it in response to the user’s selections.

V.1.4 Query-Specific Clustering

Direct approaches to making the clustering query-specific are also possible. Hierarchical clustering is especially appealing because it appears to capture the essence of the cluster hypothesis best. The most closely related documents will appear in small, tight clusters, which will be nested inside bigger clusters containing less similar documents. The work described in Tombros, Villa, and Rijsbergen (2002) tested the cluster hypothesis on several document collections and showed that it holds for query-specific clustering.

Recent experiments with cluster-based retrieval (Liu and Croft 2003) using language models show that this method can perform consistently over document collections of realistic size, and a significant improvement in document retrieval can be obtained using clustering without the need for relevance information from the user.

V.2 THE GENERAL CLUSTERING PROBLEM

A clustering task may include the following components (Jain, Murty, and Flynn 1999):

■ Problem representation, including feature extraction, selection, or both;
■ Definition of a proximity measure suitable to the domain;
■ Actual clustering of objects;
■ Data abstraction; and
■ Evaluation.

Here we describe the representation of a general clustering problem and several common general clustering algorithms. Data abstraction and evaluation of clustering results are usually very domain-dependent and are discussed in Section V.4, which is devoted to the clustering of text data.

V.2.1 Problem Representation

All clustering problems are, in essence, optimization problems. The goal is to select the best among all possible groupings of objects according to the given clustering quality function. The quality function maps a set of possible groupings of objects into the set of real numbers in such a way that a better clustering is given a higher value.

A good clustering should group together similar objects and separate dissimilar ones. Therefore, the clustering quality function is usually specified in terms of a similarity function between objects. In fact, the exact definition of a clustering quality function is rarely needed by clustering algorithms, because the computational hardness of the task makes it infeasible to attempt to solve it exactly. Therefore, it is sufficient for the algorithms to know the similarity function and the basic requirement: that similar objects belong to the same cluster and dissimilar objects to separate clusters.

A similarity function takes a pair of objects and produces a real value that is a measure of the objects’ proximity. To do so, the function must be able to compare the internal structure of the objects. Various features of the objects are used for this purpose. As was mentioned in Chapter I, feature extraction is the process of generating the sets of features representing the objects, and feature selection is the process of identifying the most effective subset of the extracted features.

The most common vector space model assumes that the objects are vectors in a high-dimensional feature space. A common example is the bag-of-words model of text documents. In a vector space model, the similarity function is usually based on the distance between the vectors in some metric.
V.2.2 Similarity Measures

The most popular metric is the usual Euclidean distance

$$D(x^i, x^j) = \sqrt{\sum_k \left( x^i_k - x^j_k \right)^2},$$

which is the particular case with $p = 2$ of the Minkowski metric

$$D_p(x^i, x^j) = \left( \sum_k \left( x^i_k - x^j_k \right)^p \right)^{1/p}.$$

For text document clustering, however, the cosine similarity measure is the most common:

$$\mathrm{Sim}(x^i, x^j) = x'^i \cdot x'^j = \sum_k x'^i_k \cdot x'^j_k,$$

where $x'$ is the normalized vector $x' = x/|x|$. There are many other possible similarity measures suitable for particular purposes.

V.3 CLUSTERING ALGORITHMS

Several different variants of the abstract clustering problem exist. A flat (or partitional) clustering produces a single partition of a set of objects into disjoint groups, whereas a hierarchical clustering results in a nested series of partitions. Each of these can be either a hard clustering or a soft one. In a hard clustering, every object belongs to exactly one cluster. In a soft clustering, the membership is fuzzy: objects may belong to several clusters with a fractional degree of membership in each.

Irrespective of the problem variant, clustering optimization problems are computationally very hard. The brute-force algorithm for a hard, flat clustering of an $n$-element set into $k$ clusters would need to evaluate $k^n / k!$ possible partitionings. Even enumerating all possible single clusters of size $l$ requires $n!/(l!(n-l)!)$ steps, which is exponential in both $n$ and $l$. Thus, there is no hope of solving the general optimization problem exactly, and usually some kind of greedy approximation algorithm is used.

Agglomerative algorithms begin with each object in a separate cluster and successively merge clusters until a stopping criterion is satisfied. Divisive algorithms begin with a single cluster containing all objects and perform splitting until a stopping criterion is met. “Shuffling” algorithms iteratively redistribute objects among clusters. The most commonly used algorithms are K-means (hard, flat, shuffling), EM-based mixture resolving (soft, flat, probabilistic), and HAC (hierarchical, agglomerative).

V.3.1 K-Means Algorithm

The K-means algorithm partitions a collection of vectors $\{x_1, x_2, \ldots, x_n\}$ into a set of clusters $\{C_1, C_2, \ldots, C_k\}$. The algorithm needs $k$ cluster seeds for initialization. They can be externally supplied or picked randomly among the vectors. The algorithm proceeds as follows:

Initialization: The $k$ seeds, either given or selected randomly, form the cores of $k$ clusters. Every other vector is assigned to the cluster of the closest seed.

Iteration: The centroids $M_i$ of the current clusters are computed:

$$M_i = \frac{1}{|C_i|} \sum_{x \in C_i} x.$$

Each vector is then reassigned to the cluster with the closest centroid.

Stopping condition: At convergence – when no more changes occur.

The K-means algorithm maximizes the clustering quality function $Q$:

$$Q(C_1, C_2, \ldots, C_k) = \sum_i \sum_{x \in C_i} \mathrm{Sim}(x, M_i).$$

If the distance metric (the inverse of the similarity function) behaves well with respect to the centroid computation, then each iteration of the algorithm increases the value of $Q$. A sufficient condition is that the centroid of a set of vectors be the vector that maximizes the sum of similarities to all the vectors in the set. This condition is true for all “natural” metrics. It follows that the K-means algorithm always converges to a local maximum.

The K-means algorithm is popular because of its simplicity and efficiency. The complexity of each iteration is $O(kn)$ similarity comparisons, and the number of necessary iterations is usually quite small.
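A minimal sketch of the algorithm just described, using Euclidean distance and random seed selection (the data and parameter choices are illustrative):

import numpy as np

def k_means(X, k, rng, max_iter=100):
    centroids = X[rng.choice(len(X), size=k, replace=False)]  # random seeds
    for _ in range(max_iter):
        # Assign every vector to the cluster of the closest centroid.
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        # Recompute centroids; keep the old one if a cluster went empty.
        new = np.array([X[labels == j].mean(axis=0) if np.any(labels == j)
                        else centroids[j] for j in range(k)])
        if np.allclose(new, centroids):      # convergence: no more changes
            return labels, new
        centroids = new
    return labels, centroids

rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 0.1, (10, 2)), rng.normal(3, 0.1, (10, 2))])
labels, centroids = k_means(X, k=2, rng=rng)
print(labels, centroids.round(2), sep="\n")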
A major problem with the K-means algorithm is its sensitivity to the initial selection of seeds. If a bad set of seeds is used, the generated clusters are often far from optimal. Several methods are known to deal with this problem. The simplest is to make several clustering runs with different random choices of seeds. Another possibility is to choose the initial seeds using external domain-dependent information.

Several algorithmic methods of dealing with the suboptimality of K-means also exist. One possibility is to allow postprocessing of the resulting clusters. For instance, the ISODATA algorithm (Jensen 1996) merges clusters if the distance between their centroids is below a certain threshold, and it splits clusters having excessively high variance. Another possibility is employed by the Buckshot algorithm described at the end of this section.

The best number of clusters, in cases where it is unknown, can be found by running the K-means algorithm with different values of k and choosing the best one according to a clustering quality function.

V.3.2 EM-based Probabilistic Clustering Algorithm

The underlying assumption of mixture-resolving algorithms is that the objects to be clustered are drawn from k distributions, and the goal is to identify the parameters of each distribution, which would allow the calculation of the probability P(C_i | x) that a given object x belongs to the cluster C_i.

Expectation maximization (EM) is a general-purpose framework for estimating the parameters of distributions in the presence of hidden variables in observable data. Adapting it to the clustering problem produces the following algorithm:

Initialization: The initial parameters of the k distributions are selected either randomly or externally.

Iteration:
E-step: Compute P(C_i | x) for all objects x by using the current parameters of the distributions. Relabel all objects according to the computed probabilities.
M-step: Re-estimate the parameters of the distributions to maximize the likelihood of the objects given their current labeling.

Stopping condition: At convergence – when the change in log-likelihood after each iteration becomes small.

After convergence, the final labeling of the objects can be used as the fuzzy clustering. The estimated distributions may also be used for other purposes.

V.3.3 Hierarchical Agglomerative Clustering (HAC)

The HAC algorithm begins with each object in a separate cluster and proceeds to repeatedly merge the pair of clusters that are most similar according to some chosen criterion. The algorithm finishes when everything is merged into a single cluster. The history of merging provides the binary tree of the cluster hierarchy.

Initialization: Every object is put into a separate cluster.
Iteration: Find the pair of most similar clusters and merge them.
Stopping condition: When everything is merged into a single cluster.

Different versions of the algorithm are produced by different choices of how the similarity between clusters is calculated. In the single-link method, the similarity between two clusters is the maximum of the similarities between pairs of objects from the two clusters. In the complete-link method, it is the minimum of the similarities of such pairs. The single-link approach may result in long, thin, chainlike clusters, whereas the complete-link method results in tight and compact clusters.
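The contrast between the two linkage criteria can be seen in a minimal Python sketch of the naive HAC loop, with pluggable single-link or complete-link cluster similarity; the function names and the toy one-dimensional data are illustrative assumptions:

def hac(items, sim, cluster_sim=max):
    # Naive HAC: cluster_sim=max gives single-link, cluster_sim=min complete-link.
    clusters = [[x] for x in items]   # every object starts in its own cluster
    merges = []                       # merge history = the cluster hierarchy
    while len(clusters) > 1:
        best = None
        for i in range(len(clusters)):
            for j in range(i + 1, len(clusters)):
                s = cluster_sim(sim(a, b) for a in clusters[i] for b in clusters[j])
                if best is None or s > best[0]:
                    best = (s, i, j)
        _, i, j = best
        merges.append((clusters[i], clusters[j]))
        merged = clusters[i] + clusters[j]
        clusters = [c for idx, c in enumerate(clusters) if idx not in (i, j)] + [merged]
    return merges

# One-dimensional toy data with similarity defined as negative distance.
points = [1.0, 1.1, 5.0, 5.2, 9.0]
for left, right in hac(points, sim=lambda a, b: -abs(a - b), cluster_sim=max):
    print(left, "+", right)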
Although the single-link method is more versatile, experience suggests that the complete-link method produces more useful results. Other possible similarity measures include "center of gravity" (the similarity between the centroids of the clusters), "average link" (the average similarity between pairs of objects from the two clusters), and "group average" (the average similarity between all pairs of objects in the merged cluster), which is a compromise between the single-link and complete-link methods.

The complexity of HAC is O(n^2 s), where n is the number of objects and s is the complexity of calculating the similarity between clusters. For some object similarity measures it is possible to compute the group-average cluster similarity in constant time, making the complexity of HAC truly quadratic. By definition, the group-average similarity between clusters C_i and C_j is

Sim(C_i, C_j) = (1 / (|C_i ∪ C_j| (|C_i ∪ C_j| - 1))) \sum_{x,y \in C_i ∪ C_j, x ≠ y} Sim(x, y).

Assuming that the similarity between individual vectors is the cosine similarity, we have

Sim(C_i, C_j) = ( (S_i + S_j) · (S_i + S_j) - (|C_i| + |C_j|) ) / ( |C_i ∪ C_j| (|C_i ∪ C_j| - 1) ),

where S_i = \sum_{x \in C_i} x is the sum of all vectors in the ith cluster. If all the S_i are maintained, the cosine similarity between clusters can always be computed in constant time.

V.3.4 Other Clustering Algorithms

Several graph-theoretic clustering algorithms exist. The best known is based on constructing the minimal spanning tree (MST) of the objects and then deleting the edges with the largest lengths to generate clusters. In fact, the hierarchical approaches are also related to graph-theoretic clustering: single-link clusters are subgraphs of the MST, which are also its connected components (Gotlieb and Kumar 1968), and complete-link clusters are maximal complete subgraphs (Backer and Hubert 1976).

Nearest neighbor clustering (Lu and Fu 1978) assigns each object to the cluster of its nearest labeled neighbor object, provided the similarity to that neighbor is sufficiently high. The process continues until all objects are labeled.

The Buckshot algorithm (Cutting et al. 1992) uses the HAC algorithm to generate a good initial partitioning for use by the K-means algorithm. For this purpose, √(kn) objects are randomly selected, and the group-average HAC algorithm is run on this sample. The k clusters generated by HAC are used to initialize the K-means algorithm, which is then run on the whole set of n objects. Because the complexity of HAC is quadratic, the overall complexity of Buckshot remains O(kn) – linear in the number of objects.

V.4 CLUSTERING OF TEXTUAL DATA

The clustering of textual data has several unique features that distinguish it from other clustering problems. This section discusses the various issues of representation, algorithms, data abstraction, and evaluation of text data clustering problems.

V.4.1 Representation of Text Clustering Problems

The most prominent feature of text documents as objects to be clustered is their very complex and rich internal structure. In order to be clustered, the documents must be converted into vectors in the feature space. The most common way of doing this, the bag-of-words document representation, assumes that each word is a dimension in the feature space. Each vector representing a document in this space has a component for each word. If a word is not present in the document, the word's component of the document vector will be zero.
Otherwise, it will be some positive value, which may depend on the frequency of the word in the document and in the whole document collection. The details and the different possibilities of the bag-of-words document representation are discussed in Section IV.

One problem that is very important for clustering is feature selection. With big document collections, the dimension of the feature space may easily range into the tens and hundreds of thousands. Because of this, feature selection methods are very important for performance reasons. Many good feature selection methods are available for categorization, but they make use of the distribution of features in classes as found in the training documents. This distribution is not available for clustering.

There are two possible ways of reducing the dimensionality of documents. Local methods do not reduce the dimension of the whole feature space but simply delete "unimportant" components from individual document vectors. Because the complexity of calculating the similarity between documents is proportional to the number of nonzero components in the document vectors, such truncation is effective. In practice, the document vectors themselves are already quite sparse, and only the centroids, which can be very dense, need truncation.

The alternative approach is global dimension reduction. Its disadvantage is that it does not adapt to the unique characteristics of each document. The advantage is that it better preserves the ability to compare dissimilar documents because every document undergoes an identical transformation. One increasingly popular technique of dimension reduction is based on latent semantic indexing (LSI).

V.4.2 Dimension Reduction with Latent Semantic Indexing

Latent semantic indexing linearly maps the N-dimensional feature space F onto a lower dimensional subspace in a provably optimal way, in the following sense: among all possible subspaces V ⊂ F of dimension k, and all possible linear maps M from F onto V, the map given by LSI perturbs the documents the least, in that

\sum_{d \in documents} |d - M(d)|^2

is minimal. LSI is based upon applying the singular value decomposition (SVD) to the term-document matrix.

V.4.3 Singular Value Decomposition

An SVD of a real m × n matrix A is a representation of the matrix as a product

A = U D V^T,

where U is a column-orthonormal m × r matrix, D is a diagonal r × r matrix, and V is a column-orthonormal n × r matrix, with r denoting the rank of A. The term "column-orthonormal" means that the column vectors are normalized and have zero pairwise dot-products; thus, U^T U = V^T V = I. The diagonal elements of D are the singular values of A and can all be chosen to be positive and arranged in descending order, which makes the decomposition unique.

There are many methods of computing the SVD of matrices. See Berry (1992) for methods adapted to large but sparse matrices.

Using SVD for Dimension Reduction

The dimension reduction proceeds in the following steps. First, a terms-by-documents rectangular matrix A is formed. Its columns are the vector representations of documents; thus, the matrix element A_td is nonzero when the term t appears in the document d. Then, the SVD of the matrix A is calculated: A = U D V^T. Next, the dimension reduction takes place: we keep the k highest values in the matrix D and set the others to zero, resulting in the matrix D'. It can be shown that the matrix A' = U D' V^T is the matrix of rank k that is closest to A in the least-squares sense.
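A minimal numpy sketch of this truncation step, under the assumption of a small illustrative terms-by-documents matrix:

import numpy as np

# Toy terms-by-documents matrix: rows are terms, columns are documents.
A = np.array([[1.0, 1.0, 0.0, 0.0],
              [1.0, 0.0, 0.0, 0.0],
              [0.0, 0.0, 1.0, 1.0],
              [0.0, 0.0, 0.0, 1.0]])

U, s, Vt = np.linalg.svd(A, full_matrices=False)   # A = U @ diag(s) @ Vt

k = 2
s_trunc = s.copy()
s_trunc[k:] = 0.0                                  # keep the k largest singular values
A_k = U @ np.diag(s_trunc) @ Vt                    # rank-k least-squares approximation

# k-dimensional LSI document representation (rows of V D', as derived
# in the text that follows).
docs_lsi = Vt.T[:, :k] * s[:k]
print(np.round(A_k, 3))
print(np.round(docs_lsi, 3))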
The cosine similarity between the original document vectors is given by the dot products of their corresponding columns in the matrix A. The reduced-dimensional approximation is calculated as the dot product of the corresponding columns of A'. Of course, A' itself need never be calculated explicitly. Instead, we can see that

A'^T A' = V D'^T U^T U D' V^T = V D'^T D' V^T,

and thus the representation of the documents in the low-dimensional LSI space is given by the rows of the matrix V D', and the dot products can be calculated between those k-dimensional rows.

Medoids

It is possible to improve the speed of text clustering algorithms by using medoids instead of centroids (mentioned in Section V.3). Medoids are the actual documents that are most similar to the centroids. They improve the speed of the algorithms in a way similar to feature space dimensionality reduction because sparse document vectors are substituted for dense centroids.

Using Naïve Bayes Mixture Models with the EM Clustering Algorithm

For the EM-based fuzzy clustering of text documents, the most common assumption is the Naïve Bayes model of cluster distribution. This model has the following parameters: the prior cluster probabilities P(C_i) and the probabilities P(f | C_i) of features in each cluster. Given the model parameters, the probability that a document x belongs to a cluster is

P(C_i | x) = P(C_i) \prod_f P(f | C_i) / \sum_C ( P(C) \prod_f P(f | C) ).

On the assumption that the current document labeling is L(x), the maximum likelihood estimates of the parameters are

P(C_i) = |{x : L(x) = C_i}| / N,
P(f | C_i) = |{x : L(x) = C_i and f ∈ x}| / |{x : L(x) = C_i}|,

where N is the number of documents.

Using this method it is possible to improve categorization systems in cases in which the number of labeled documents is small but many unlabeled documents are available. The labeled documents are used to train the initial NB models, which are then used within the EM algorithm to cluster the unlabeled documents. The final cluster models are the output classifiers produced by this technique. Experiments have shown a significant improvement in accuracy over classifiers trained only on labeled data (Nigam et al. 2000).

V.4.4 Data Abstraction in Text Clustering

Data abstraction in clustering problems entails generating a meaningful and concise description of a cluster for the purposes of further automatic processing or for user consumption. The machine-usable abstraction is usually the easiest; natural candidates are cluster centroids or probabilistic models of clusters. In the case of text clustering, the problem is to give the user a meaningful cluster label. For some applications, such as scatter/gather browsing, a good label is almost as important as a good clustering. A good label consists of a very small number of terms that precisely distinguish the cluster from the others. For instance, after clustering documents about "jaguar," we would like one cluster to be named "Animal" and another "Car."

There are many possibilities for generating cluster labels automatically:

■ The title of the medoid document, or several typical document titles, can be used.
■ Several words common to the cluster documents can be shown. A common heuristic is to present the five or ten most frequent terms in the centroid vector of the cluster.
■ A distinctive noun phrase, if it can be found, is probably the best label.

V.4.5 Evaluation of Text Clustering

Measuring the quality of an algorithm is a common problem in text as well as data mining.
It is easy to compare exact measures such as time and space complexity, but the quality of the results needs human judgment, which introduces a high degree of subjectivity.

The "internal" measures of clustering quality are essentially the functions the algorithms try to optimize. Therefore, comparing such measures for clusterings produced by different algorithms only shows which algorithm produces a better approximation of the general optimization problem for the particular case. This makes some sense, but what we would really like is a measure of how good the clustering is for human consumption or for further processing.

Given a set of categorized (manually classified) documents, it is possible to use this benchmark labeling for the evaluation of clusterings. The most common measure is purity. Assume {L_1, L_2, ..., L_n} are the manually labeled classes of documents and {C_1, C_2, ..., C_m} are the clusters returned by the clustering process. Then

Purity(C_i) = max_j |L_j ∩ C_i| / |C_i|.

Other measures include the entropy of classes in clusters, the mutual information between classes and clusters, and so on. However, all these measures suffer from the limitation that there is more than one way to classify documents – all equally right.

Probably the most useful evaluation is the straightforward measurement of the utility of the resulting clustering in its intended application. For instance, assume the clustering is used for improving the navigation of search results. Then it is possible to prepare a set of queries and the intended results manually and to measure the improvement produced by clustering directly using simulated experiments.

V.5 CITATIONS AND NOTES

Section V.1
The scatter/gather method was introduced by Cutting in Cutting et al. (1992) and further expanded in Cutting, Karger, et al. (1993). Application and analysis of the scatter/gather methods are described in Cutting, Karger, et al. (1992); Cutting, Karger, and Pedersen (1993); Hearst, Karger, and Pedersen (1995); and Hearst and Pedersen (1996).

Section V.3
Descriptions of general clustering algorithms and comparisons between them can be found in the following papers: Mock (1998); Zhong and Ghosh (2003); Jain and Dubes (1988); Goldszmidt and Sahami (1998); Jain et al. (1999); and Steinbach, Karypis, and Kumar (2000). Algorithms for performing clustering on very large amounts of data are described in Bradley, Fayyad, and Reina (1998) and Fayyad, Reina, and Bradley (1998).

Section V.4
Clustering using latent semantic indexing (LSI) is described in the following papers: Deerwester et al. (1990); Hull (1994); and Landauer, Foltz, and Laham (1998). In many cases there is a need to utilize background information and external knowledge bases. Clustering using background information is described in Hotho et al. (2003), and clustering using ontologies is described in Hotho, Staab, and Maedche (2001). Clustering using the popular WordNet resource is mentioned in Benkhalifa, Mouradi, and Bouyakhf (2001a, 2001b). Specific clustering algorithms adapted for textual data are described in Iwayama and Tokunaga (1995a, 1995b); Goldszmidt and Sahami (1998); Zamir and Etzioni (1999); El-Yaniv and Souroujon (2001); and Dhillon, Mallela, and Kumar (2002).

VI Information Extraction

VI.1 INTRODUCTION TO INFORMATION EXTRACTION

A mature IE technology would allow the rapid creation of extraction systems for new tasks whose performance would approach a human level.
Nevertheless, even systems without near-perfect recall and precision can be of real value. In such cases, the results of the IE system would need to be fed into an auditing environment that allows auditors to fix the system's precision errors (an easy task) and recall errors (much harder). These types of systems are also of value in cases in which the information is too vast for the users to be able to read all of it; hence, even a partially correct IE system would be preferable to the alternative of not obtaining any potentially relevant information.

In general, IE systems are useful if the following conditions are met:

■ The information to be extracted is specified explicitly, and no further inference is needed.
■ A small number of templates are sufficient to summarize the relevant parts of the document.
■ The needed information is expressed relatively locally in the text (cf. Bagga and Biermann 2000).

As a first step in tagging documents for text mining systems, each document is processed to find (i.e., extract) entities and relationships that are likely to be meaningful and content-bearing. The term relationships here denotes facts or events involving certain entities. By way of example, a possible event might be a company's entering into a joint venture to develop a new drug. An example of a fact would be the knowledge that a gene causes a certain disease. Facts are static and usually do not change; events are more dynamic and generally have a specific time stamp associated with them.

The extracted information provides more concise and precise data for the mining process than the more naive word-based approaches such as those used for text categorization, and the information tends to represent concepts and relationships that relate directly to the examined document's domain.

Figure VI.1. Schematic view of the information extraction process.

Consequently, IE methods allow for mining of the actual information present within the text rather than the limited set of tags associated with the documents. The IE process makes the number of different relevant entities and relationships on which the text mining is performed unbounded – typically thousands or even millions, far beyond the number of tags any automated categorization system could handle. Thus, preprocessing techniques involving IE tend to create richer and more flexible representation models for documents in text mining systems.

IE can be seen as a limited form of "complete text comprehension." No attempt is made to understand the document at hand fully. Instead, one defines a priori the types of semantic information to be extracted from the document. IE represents documents as sets of entities and frames, the latter being another way of formally describing the relationships between the entities. The set of all possible entities and frames is usually open and very big compared with the set of categorization keywords, and so it cannot be created manually. Instead, the features are extracted directly from the text.

The hierarchy relation between the entities and frames is usually a simple tree. The root has several children – the entity types (e.g., "Company," "Person," "Gene," etc.) – under which the actual entities are automatically added as they are being discovered. The frames constitute structured objects, and so they cannot be directly used as features for text mining. Instead, the frame attributes and its label are used as features.
The frame itself, however, may bypass the regular text mining operations and be fed directly to the querying and visualization components.

The simplest kind of information extraction is called term extraction. There are no frames, and there is only one entity type – simply "term."

Figure VI.1 gives a schematic view of the IE process. At the heart of the process is the IE engine, which takes a set of documents as input. The engine works by using a statistical model, a rule module, or a mix of both. The output of the engine is a set of annotated frames extracted from the documents. The frames populate a table in which the fields of the frames are the rows of the table.

VI.1.1 Elements That Can Be Extracted from Text

There are four basic types of elements that can, at present, be extracted from text:

■ Entities. Entities are the basic building blocks that can be found in text documents. Examples include people, companies, locations, genes, and drugs.
■ Attributes. Attributes are features of the extracted entities. Some examples of attributes are the title of a person, the age of a person, and the type of an organization.
■ Facts. Facts are the relations that exist between entities. Some examples are an employment relationship between a person and a company or phosphorylation between two proteins.
■ Events. An event is an activity or occurrence of interest in which entities participate, such as a terrorist act, a merger between two companies, a birthday, and so on.

Figure VI.2 shows a full news article that demonstrates several tagged entities and relationships.

VI.2 HISTORICAL EVOLUTION OF IE: THE MESSAGE UNDERSTANDING CONFERENCES AND TIPSTER

The Defense Advanced Research Projects Agency (DARPA) has been sponsoring efforts to codify and expand IE tasks, and the most comprehensive work has arisen from the MUC-6 and MUC-7 (Message Understanding Conference) conferences. We now describe the various tasks introduced during the MUC conferences.

VI.2.1 Named Entity Recognition

The named entity recognition (NE, sometimes also denoted NER) phase is the basic task-oriented phase of any IE system. During this phase the system tries to identify all mentions of proper names and quantities in the text, such as the following types taken from MUC-7:

■ People names, geographic locations, and organizations;
■ Dates and times; and
■ Monetary amounts and percentages.

The accuracy (F1) of the extraction results obtained on the NE task is usually quite high; the best systems manage to reach up to a 95-percent breakeven point between precision and recall. The NE task is weakly domain dependent – that is, changing the domain of the texts being analyzed may or may not degrade performance. Performance depends mainly on the level of generalization used while developing the NE engine and on the similarity between the domains.

Figure VI.2. A tagged news article.

Proper names usually account for 70 percent of the named entities in the MUC corpuses, dates and times account for 25 percent, and monetary amounts and percentages account for less than 5 percent of the total named entities. Of the named entities, about 45–50 percent are organization names, 12–32 percent are location tags, and 23–39 percent are people tags.
The MUC committee stipulated that the following types of noun phrases should not be extracted because they do not refer to any specific entity:

■ Artifacts (e.g., Wall Street Journal, MTV, etc.),
■ Common nouns used in anaphoric reference (such as the plane, the company, etc.),
■ Names of groups of people and laws named after people (e.g., Republicans, "Gramm–Rudman amendment," "the Nobel Prize," etc.),
■ Adjectival forms of location names (e.g., "American," "Japanese," etc.), and
■ Miscellaneous uses of numbers that are not specifically currency or percentages.

VI.2.2 Template Element Task

Template element tasks (TEs) are independent of, or neutral with respect to, scenario or domain. Each TE consists of a generic object and some attributes that describe it. This enables separating domain-independent from domain-dependent aspects of extraction. The following TE types were included in MUC-7:

■ Person
■ Organization
■ Location (airport, city, country, province, region, water)
■ Artifact

Here is an example of TEs. A typical paragraph of text from a press release is as follows (taken from <projects/ muc/>):

Fletcher Maddox, former Dean of the UCSD Business School, announced the formation of La Jolla Genomatics together with his two sons. La Jolla Genomatics will release its product Geninfo in June 1999. L.J.G. is headquartered in the Maddox family's hometown of La Jolla, CA.

Some of the entities and descriptors that can be automatically extracted from this paragraph by using information extraction algorithms include the following:

entity {
  ID = 1
  NAME = "Fletcher Maddox"
  DESCRIPTOR = "Former Dean of UCSD Business School"
  CATEGORY = person
}
entity {
  ID = 2
  NAME = "La Jolla Genomatics"
  ALIAS = "LJG"
  DESCRIPTOR = ""
  CATEGORY = organization
}
entity {
  ID = 3
  NAME = "La Jolla"
  DESCRIPTOR = "the Maddox family hometown"
  CATEGORY = location
}

VI.2.3 Template Relationship (TR) Task

The template relationship (TR) task expresses a domain-independent relationship between entities, whereas TEs just identify the entities themselves. The goal of the TR task is to find the relationships that exist between the template elements extracted from the text (during the TE task). Just like the definition of an entity, the entity attributes depend on the problem and the nature of the texts being analyzed; the relationships that may exist between template elements are domain dependent too. For example, persons and companies may be related by an employee_of relation, companies and locations may be related by a location_of relation, and companies may be interrelated by a subdivision_of relation.

The following TRs were extracted from the sample text:

employee_of (Fletcher Maddox, UCSD Business School)
employee_of (Fletcher Maddox, La Jolla Genomatics)
product_of (Geninfo, La Jolla Genomatics)
location_of (La Jolla, La Jolla Genomatics)
location_of (CA, La Jolla Genomatics)

VI.2.4 Scenario Template (ST)

Scenario templates (STs) express domain- and task-specific entities and relations. The main purpose of the ST tasks is to test portability to new extraction problems quickly. This task gives an advantage to technologies that are less labor intensive and can therefore port the extraction engine to a new domain in a short time (a couple of weeks).
Here are a few events that were extracted from the sample text:

company-formation-event {
  PRINCIPAL = "Fletcher Maddox"
  DATE = ""
  CAPITAL = ""
}
product-release-event {
  COMPANY = "La Jolla Genomatics"
  PRODUCT = "Geninfo"
  DATE = "June 1999"
  COST = ""
}

VI.2.5 Coreference Task (CO)

The coreference task (CO) captures information on coreferring expressions (e.g., pronouns or any other mentions of a given entity), including those tagged in the NE and TE tasks. The CO task focuses on the IDENTITY (IDENT) relation, which is symmetric and transitive. It creates equivalence classes (or coreference chains) used for scoring. The task is to mark nouns, noun phrases, and pronouns. Consider the following sentence:

David1 came home from school, and saw his1 mother2, Rachel2. She2 told him1 that his1 father will be late.

The correctly identified pronominal coreference chains are (David1, his1, him1, his1) and (mother2, Rachel2, She2). This is not a high-accuracy task for IE systems, but properly resolving some kinds of coreference is difficult even for human annotators, who achieve only about 80 percent.

An MUC-style annotation is shown in Figure VI.3, and a sample template extracted from that text fragment is shown in Figure VI.4.

"It's a chance to think about first-level questions," said Ms. <enamex type="PERSON">Cohn</enamex>, a partner in the <enamex type="ORGANIZATION">McGlashan & Sarrail</enamex> firm in <enamex type="LOCATION">San Mateo</enamex>, <enamex type="LOCATION">Calif.</enamex>

Figure VI.3. MUC-style annotation.

VI.2.6 Some Notes about IE Evaluation

We follow here the discussion of Lavelli et al. (2004) about various problems in the common evaluation methodology of information extraction. The main problem is that it is very hard to compare different IE experiments without comparing the exact settings of each experiment. In particular, the following problems were raised:

■ The exact split between the training set and the test set: considering both the proportions between the two sets (e.g., a 50/50 versus a 90/10 split) and the repetition procedure adopted in the experiment (e.g., a single specific split between training and test, versus n repeated random splits, versus n-fold cross-validation).
■ Determining the test set: the test set for each point on the learning curve can be the same (a hold-out set) or be different and based on the exact split.
■ What constitutes an exact match: how to treat an extraneous or a missing comma – that is, should it be counted as a mistake, or is it close enough and does not miss any critical information?
■ Feature selection: many different types of features can be used, including orthographic features, linguistic features (such as POS tags, stemming, etc.), and semantic features based on external ontologies. In order to compare any two algorithms properly, they must operate on the same set of features.

<ORGANIZATION-9303020074-1> :=
  ORG_NAME: "McGlashan & Sarrail"
  ORG_ALIAS: "M & S"
  ORG_LEADER: <PERSON-9303020074-57>
  ORG_TYPE: COMPANY

Figure VI.4. MUC-style templates.

Counting the Correct Results

■ Exact Matches: Instances generated by the extractor that perfectly match actual instances annotated by the domain expert.
■ Contained Matches: Instances generated by the extractor that contain actual instances annotated by the domain expert plus some padding on both sides.
■ Overlapped Matches: Instances generated by the extractor that overlap actual instances annotated by the domain expert (at least one word is in the intersection of the instances).

Another aspect is how to treat entities that appear multiple times in a document. One option is to extract all of them, in which case any omission lowers recall. Another option is to extract each entity just once; then it is enough to identify one occurrence of each entity. The latter option makes sense in situations where we are just interested in knowing which entities appear in each document (and we do not care how many times they appear).

VI.3 IE EXAMPLES

This section provides several real-world examples of input documents and the results obtained by performing information extraction on them. The examples have been culled from a variety of domains and demonstrate a broad range of tagging standards to give the reader exposure to the different ways of coding the information extraction process.

VI.3.1 Case 1: Simplistic Tagging, News Domain

Consider a system that extracts business events from news articles. Such a system is useful for business analysts or even casual users interested in keeping abreast of current business events. Consider the following text fragment:

"TeliaSonera, the Nordic region's largest telecoms operator, was formed in 2002 from the cross-border merger between Telia and Finland's Sonera."

One can extract the following frame from it:

FrameName: Merger
Company1: Telia
Company2: Sonera
New Company: TeliaSonera

This frame provides a concise summary of the text fragment. The following cases show the types of summary information that can be extracted from other text fragments.

VI.3.2 Case 2: Natural Disasters Domain

4 Apr Dallas – Early last evening, a tornado swept through an area northwest of Dallas, causing extensive damage. Witnesses confirm that the twister occurred without warning at approximately 7:15 p.m. and destroyed the mobile homes. The Texaco station, at 102 Main Street, Farmers Branch, TX, was severely damaged, but no injuries were reported. Total property damages are estimated at $350,000.

Event: tornado
Date: 4/3/97
Time: 19:15
Location: Farmers Branch : "northwest of Dallas" : TX : USA
Damage: mobile homes
        Texaco station
Estimated Losses: $350,000
Injuries: none

VI.3.3 Case 3: Terror-Related Article, MUC-4

19 March – a bomb went off this morning near a power tower in San Salvador leaving a large part of the city without energy, but no casualties have been reported. According to unofficial sources, the bomb – allegedly detonated by urban guerrilla commandos – blew up a power tower in the northwestern part of San Salvador at 0650 (1250 GMT).

Incident Type: Bombing
Date: March 19th
Location: El Salvador: San Salvador (City)
Perpetrator: urban guerrilla commandos
Physical Target: power tower
Human Target: –
Effect on Physical Target: destroyed
Effect on Human Target: no injury or death
Instrument: bomb

VI.3.4 Case 4: Technology-Related Article, TIPSTER-Style Tagging

Here is an article from the MUC-5 evaluation dealing with microelectronics.

<doc>
<REFNO> 000019641 </REFNO>
<DOCNO> 3560177 </DOCNO>
<DD> November 25, 1991 </DD>
<SO> News Release </SO>
<TXT>
Applied Materials, Inc. today announced its newest source technology, called the Durasource, for the Endura(TM) 5500 PVD system.
This enhanced source includes new magnet configurations, giving the industry's most advanced sputtered aluminum step coverage in sub-micron contacts, and a new one-piece target that more than doubles target life to approximately 8000 microns of deposition compared to conventional two-piece "bonded" targets. The Durasource enhancement is fully retrofittable to installed Endura 5500 PVD systems. The Durasource technology has been specially designed for 200 mm wafer applications, although it is also available for 125 mm and 1s0mm wafer sizes. For example, step coverage symmetry is maintained within 3% between the inner and outer walls of contacts across a 200 mm wafer. Film thickness uniformity averages 3% (3 sigma) over the life of the target.
</TXT>
</doc>

<TEMPLATE-3560177-1> :=
  DOC NR: 3560177
  DOC DATE: 251192
  DOCUMENT SOURCE: "News Release"
  CONTENT: <MICROELECTRONICS CAPABILITY-3560177-1>
  DATE TEMPLATE COMPLETED: 021292
  EXTRACTION TIME: 5
  COMMENT: "article focuses on nonreportable target source but reportable info available" / "TOOL VERSION: LOCKE.3.4" / "FILLRULES VERSION: EME.4.0"
<MICROELECTRONICS CAPABILITY-3560177-1> :=
  PROCESS: <LAYERING-3560177-1>
  MANUFACTURER: <ENTITY-3560177-1>
<ENTITY-3560177-1> :=
  NAME: Applied Materials, INC
  TYPE: COMPANY
<LAYERING-3560177-1> :=
  TYPE: SPUTTERING
  FILM: ALUMINUM
  EQUIPMENT: <EQUIPMENT-3560177-1>
<EQUIPMENT-3560177-1> :=
  NAME OR MODEL: "Endura(TM) 5500"
  MANUFACTURER: <ENTITY-3560177-1>
  EQUIPMENT TYPE: PVD SYSTEM
  STATUS: IN USE
  WAFER SIZE: (200 MM) (125 MM)
  COMMENT: "actually three wafer sizes, third is error 1s0mm"

VI.3.5 Case 5: Comprehensive Stage-by-Stage Example

■ Original Sentence: Mr. Eskew was Vice President of Worldwide Sales for Sandpiper Networks, which was recently acquired by Digital Island where he created the worldwide sales strategy.

■ After Part-of-Speech Tagging:
<Prop>Mr. Eskew</Prop> <Verb>was</Verb> <Prop>Vice President</Prop> <Prep>of</Prep> <Prop>Worldwide Sales</Prop> <Prep>for</Prep> <Prop>Sandpiper Networks</Prop> which <Verb>was</Verb> <Adv>recently</Adv> <Verb>acquired</Verb> <Prep>by</Prep> <Prop>Digital Island</Prop> where <Pron>he</Pron> <Verb>created</Verb> <Det>the</Det> <Adj>worldwide</Adj> <Nn>sales strategy.</Nn>

■ After Shallow Parsing:
NP:{Mr. Eskew} was NP:{Vice President of Worldwide Sales} for NP:{Sandpiper Networks} which was ADV:{recently} V:{acquired} by NP:{Digital Island} where NP:{he} V:{created} NP:{the worldwide sales strategy}

■ After Named Entity Recognition:
Person:{Mr. Eskew} was Position:{Vice President of Worldwide Sales} for Company:{Sandpiper Networks} which was ADV:{recently} V:{acquired} by Company:{Digital Island} where Person:{he} V:{created} NP:{the worldwide sales strategy}

■ After Merging (Anaphora Resolution):
Person:{Mr. Eskew} was Position:{Vice President of Worldwide Sales} for Company:{Sandpiper Networks} which was ADV:{recently} V:{acquired} by Company:{Digital Island} where Person:{Mr. Eskew} V:{created} NP:{the worldwide sales strategy}

■ Frames Extracted:
Frame Type: Acquisition
Acquiring Company: Digital Island
Acquired Company: Sandpiper Networks
Acquisition Status: Historic

Frame Type: PersonPositionCompany
Person: Mr. Eskew
Position: Vice President of Worldwide Sales
Company: Sandpiper Networks
Status: Past

VI.4 ARCHITECTURE OF IE SYSTEMS

Figure VI.5 shows the generalized architecture for a basic IE system of the type that would be used for text mining preprocessing activities.
The subcomponents are colored according to their necessity within the full system.

Figure VI.5. Architecture of a typical information extraction system (stages: tokenization/zoning; morphological and lexical analysis, with part-of-speech tagging and sense disambiguation; syntactic analysis, via shallow or deep parsing; and domain analysis, with anaphora resolution and integration – each marked as must, advisable, nice to have, or can pass).

A typical general-use IE system has three to four major components. The first component is a tokenization or zoning module, which splits an input document into its basic building blocks. The typical building blocks are words, sentences, and paragraphs; more rarely there may be higher-level building blocks such as sections and chapters.

The second component is a module for performing morphological and lexical analysis. This module handles activities such as assigning POS tags to the document's various words, creating basic phrases (like noun phrases and verb phrases), and disambiguating the sense of ambiguous words and phrases.

The third component is a module for syntactic analysis. This part of an IE system establishes the connections between the different parts of each sentence, by means of either full parsing or shallow parsing.

A fourth and increasingly common component in IE systems performs what might be termed domain analysis, a function in which the system combines all the information collected from the previous components and creates complete frames that describe the relationships between entities. Advanced domain analysis modules also possess an anaphora resolution component. Anaphora resolution concerns itself with resolving indirect (and usually pronomic) references to entities that may appear in sentences other than the one containing the primary direct reference to an entity.

VI.4.1 Information Flow in an IE System

Most information extraction systems use a deterministic bottom-up approach to analyzing the document. Initially, the system identifies the low-level elements and then identifies higher level features that are based on the low-level features identified in the previous phases.

Processing the Initial Lexical Content: Tokenization and Lexical Analysis

The first two phases of an IE system both concern themselves with processing a document to identify various elements of its basic lexical content. As a first pass, a document is divided into tokens, sentences, and possibly paragraphs. Then, each word is tagged with its part of speech and lemma.

In addition, an IE system can use specialized dictionaries and gazetteers to tag words that appear in those word lists. Typical dictionaries include names of countries, cities, people's first names, public companies, company suffixes, common titles in companies, and so on. Dictionary support during initial tagging creates richer document representations. For example, using appropriate dictionaries, the word "Robert" would be tagged as a "first name," "IBM" would be tagged as a company, and the acronym "spa" could be tagged as a "company suffix."

Proper Name Identification

Commonly, the next phase is proper name identification. After an IE system performs the basic lexical analysis, it typically tries to identify a variety of simple entity types such as dates, times, e-mail addresses, organizations, people names, locations, and so on. The entities are identified by using regular expressions that utilize the context around the proper names to identify their type.
The regular expressions can use POS tags, syntactic features, and orthographic features such as capitalization. Proper name identification is performed by scanning the words in the sentence while trying to match one of the patterns in a predefined set of regular expressions. Each proper name type has its associated set of regular expressions. All patterns are attempted for each word. If more than one pattern is matched, the IE system picks the pattern that matches the longest word sequence; if there is a tie, the IE system usually just uses the first pattern. If no pattern matches, the IE system moves to the next word and reapplies the entire set of patterns. The process continues until the end of the sentence is reached.

To illustrate how such regular expressions are constructed, we present several regular expressions for identifying people names:

1. @Honorific CapitalizedWord CapitalizedWord
   a. @Honorific is a list of honorific titles such as {Dr., Prof., Mr., Ms., Mrs., etc.}
   b. Example: Mr. John Edwards
2. @FirstNames CapitalizedWord
   a. @FirstNames is a list of common first names collected from sites like the U.S. census and other relevant sites
   b. Example: Bill Hellman
3. CapitalizedWord CapitalizedWord [,] @PersonSuffix
   a. @PersonSuffix is a list of common suffixes such as {Jr., Sr., II, III, etc.}
   b. Example: Mark Green, Jr.
4. CapitalizedWord CapitalLetter [.] CapitalizedWord
   a. A CapitalLetter followed by an optional period is a middle initial of a person and a strong indicator that this is a person name
   b. Example: Nancy M. Goldberg
5. CapitalizedWord CapitalLetter @PersonVerbs
   a. @PersonVerbs is a list of common verbs that are strongly associated with people, such as {said, met, walked, etc.}

A more expansive treatment of the topic of manual rule writing and pattern development is offered in Appendix A.

Shallow Parsing

After identifying the basic entities, an IE system moves to shallow parsing and the identification of noun and verb groups. These elements will be used as building blocks for the next phase, which identifies relations between them. As an example, consider the following annotated text fragment:

Associated Builders and Contractors (ABC)E1 today announcedE2 that Bob PiperE3, co-ownerE4 and vice president of corporate operationsE5, Piper Electric Co., Inc.E6, Arvada, Colo.E7, has been namedE8 vice president of workforce developmentE9.

Essentially, at this point, an IE system focuses on creating a comprehensive listing of the types of elements found in such a text fragment, in the manner shown in Figure VI.6.

Element   Grammatical Function   Type
E1        NP                     Company
E2        VG
E3        NP                     Person
E4        NP                     Position
E5        NP                     Position
E6        NP                     Company
E7        NP                     Location
E8        VG
E9        NP                     Position

Figure VI.6. Identifying a text element's grammatical function and type.

The next step performed by an IE system is the construction of noun groups based on the noun phrases (NPs) constructed before. The construction is based on common patterns developed manually. On the basis of a few typical patterns such as

1. Position and Position, Company
2. Company, Location,

one can construct the following noun groups (NGs):

1. co-ownerE4 and vice president of corporate operationsE5, Piper Electric Co., Inc.E6
2. Piper Electric Co., Inc.E6, Arvada, Colo.E7.
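As a concrete illustration of pattern 1 above (@Honorific CapitalizedWord CapitalizedWord), here is a minimal Python sketch; the honorific list and the test sentence are illustrative assumptions:

import re

# Pattern 1: @Honorific CapitalizedWord CapitalizedWord
HONORIFICS = r"(?:Dr\.|Prof\.|Mr\.|Ms\.|Mrs\.)"
CAPWORD = r"[A-Z][a-z]+"
PERSON_RE = re.compile(rf"{HONORIFICS}\s+{CAPWORD}\s+{CAPWORD}")

text = "Yesterday Mr. John Edwards met with Prof. Alice Walker in Boston."
print(PERSON_RE.findall(text))
# ['Mr. John Edwards', 'Prof. Alice Walker']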
Already, even at the conclusion of the initial tokenization and lexical analysis stages of the IE system's preprocessing operations, a relatively rich amount of structure has been created to represent the text of a particular document. This structure serves as a building block for the further phases of IE-related preprocessing operations.

Building Relations

The construction of relations between entities is done by using domain-specific patterns. The generality of the patterns depends on the depth of the linguistic analysis performed at the sentence level. If this analysis is performed only against individual noun or verb phrases, or both, then one will need to develop five to six times more patterns than if the subject, verb, and object of each sentence were identified. To extract an executive appointment event from the text fragment above, one could use the following pattern:

Company [Temporal] @Announce Connector Person PersonDetails @Appoint Position

This pattern can be broken down in the following way:

■ Temporal is a phrase indicating a specific date and/or time such as {yesterday, today, tomorrow, last week, an hour ago}
■ @Announce is a set of phrases that correspond to the activity of making a public announcement, like {announced, notified, etc.}
■ Connector is a set of connecting words like {that, . . . }
■ PersonDetails is a phrase describing some fact about a person (such as his or her age, current position, etc.); it will usually be surrounded by commas
■ @Appoint is a set of phrases that correspond to the activity of appointing a person to a position, like {appointed, named, nominated, etc.}

One of the main tasks during relation extraction is coreference resolution, on which we expand in Section VI.5.

Inferencing

In many cases, an IE system has to resort to some kind of common-sense reasoning and infer missing values to complete the identification of events. The inference rules are written in a formalism similar to Prolog clauses. Common examples include family relations, management changes, spatial relations, and so on. Below are two examples, one related to the location of a person and the other to the position a person is going to fill.

The first example is a simple two-sentence text fragment.

Example 1: John Edgar was reported to live with Nancy Leroy. His Address is 101 Forest Rd., Bethlehem, PA.

From this, it is possible to extract the following entities and events:

1. person(John Edgar)
2. person(Nancy Leroy)
3. livetogether(John Edgar, Nancy Leroy)
4. address(John Edgar, 101 Forest Rd., Bethlehem, PA)

Using the following rule, one can infer that Nancy Leroy lives at 101 Forest Rd., Bethlehem, PA:

address(P2,A) :- person(P1), person(P2), livetogether(P1,P2), address(P1,A).

The second example is also a two-sentence text fragment.

Example 2: RedCarpet Inc. announced that its President, Jay Goldman, has resigned. The company appointed Fred Robbins to the position.

From this one can extract the following entities and events:

1. company(RedCarpet)
2. person(Jay Goldman)
3. personLeftPosition(Jay Goldman, RedCarpet, President)
4. personReplacesPerson(Fred Robbins, Jay Goldman)

Using the following rule in this second example, one can infer that Fred Robbins is the new President of RedCarpet:

newposition(P2,Pos) :- person(P1), person(P2), company(C1), personLeftPosition(P1,C1,Pos), personReplacesPerson(P2,P1).
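A minimal sketch of how such Prolog-style rules can be applied by forward chaining over the extracted facts; the tuple-based fact encoding and function name are illustrative assumptions, not a real inference engine:

# Facts extracted from Example 2, represented as (predicate, args) tuples.
facts = {
    ("person", ("Jay Goldman",)),
    ("person", ("Fred Robbins",)),
    ("company", ("RedCarpet",)),
    ("personLeftPosition", ("Jay Goldman", "RedCarpet", "President")),
    ("personReplacesPerson", ("Fred Robbins", "Jay Goldman")),
}

def infer_new_positions(facts):
    # Forward-chain the rule:
    # newposition(P2,Pos) :- personLeftPosition(P1,C1,Pos),
    #                        personReplacesPerson(P2,P1).
    derived = set()
    for pred, args in facts:
        if pred != "personLeftPosition":
            continue
        p1, _company, pos = args
        for pred2, args2 in facts:
            if pred2 == "personReplacesPerson" and args2[1] == p1:
                derived.add(("newposition", (args2[0], pos)))
    return derived

print(infer_new_positions(facts))
# {('newposition', ('Fred Robbins', 'President'))}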
VI.5 ANAPHORA RESOLUTION

Anaphora or coreference resolution is the process of matching pairs of NLP expressions that refer to the same entity in the real world. It is a process that is critical to the proper functioning of advanced text mining preprocessing systems.

Below is an example of an annotated text fragment that includes chains of coreferring phrases. We can see here two chains referring to persons (#1, #5), one chain referring to an incident (#2), one chain referring to a group of people (#4), two chains referring to locations (#3, #7), and one chain referring to an organization (#6).

HADERA, Israel3 (AP) – A Palestinian gunman1 walked into a wedding hall in northern Israel3 late Thursday and opened fire, killing six people and injuring 302, police6 said. . . . Police6 earlier said the attacker1 threw hand grenades, but witnesses and later police6 accounts said the attacker1 opened fire with an M-16 and was1 stopped before he1 could throw a grenade. The Al Aqsa Brigades4, a militia4 linked to Yasser Arafat's Fatah, claimed responsibility. The group4 said that Abed Hassouna1 from a village7 near the Palestinian town of Nablus carried out the attack2 to avenge the death of Raed Karmi5, (the militia)4's leader5 in the town of Tulkarem. Hassouna1 had been a Palestinian policeman1 but left1 the force two years ago, residents of his1 village7 said.

There are two main approaches to anaphora resolution. One is a knowledge-based approach built on linguistic analysis of the sentences and coded as a rigid algorithm. The other is a machine learning approach based on an annotated corpus.

VI.5.1 Pronominal Anaphora

Pronominal anaphora deals with resolving pronouns such as he, she, and they. It is the most common type of coreference. There are three types of pronouns:

■ Reflexive pronouns: himself, herself
■ Personal pronouns: he, him, you
■ Possessive pronouns: her, his, hers

It should be pointed out that not all pronouns in English are anaphoric. For instance, "it" can often be nonanaphoric, as in the case of the previous sentence. Other examples of nonanaphoric "it" include expressions such as "It is important," "It is necessary," or "It has to be taken into account." A nonanaphoric "it" is described as pleonastic.

VI.5.2 Proper Names Coreference

The task here is to link together all the variations of a proper name (person, organization, or location) that are observed in text, as in the following example:

Former President Bush1 defended the U.S. military Thursday during a speech at one of the nation's largest Army posts, where one private accused of abusing Iraqi prisoners awaits a court-martial. "These are difficult times for the Army as the actions of a handful in Iraq violate the soldier's code," said George H. W. Bush1.

Additional examples can be observed in the annotated fragment above.

VI.5.3 Apposition

Appositives are used to provide auxiliary information for a named entity. This information is separated from the entity by a comma and either precedes it or comes directly after it, as in the following examples:

said George H. W. Bush1, the father of President Bush1.
the father of President Bush1, George H. W. Bush1 said. . . .

A necessary condition for an appositional phrase to corefer with a named entity is that they occur in different noun phrases. If the apposition is a modifier of the named entity within the same noun phrase, then they are not considered coreferring, as in the phrase "Former President Bush." In this case Former President does not corefer with Bush.
VI.5.4 Predicate Nominative

A predicate nominative occurs after a copulative verb (is, seems, looks like, appears, etc.) and completes a reference to the subject of a clause. An example follows:

Bill Gates1 is the Chairman of Microsoft Corporation1.

Subject: Bill Gates
Predicate Nominative: the Chairman of Microsoft Corporation

A predicate nominative is a candidate for coreference only if it is stated in a firm way. If it is stated in a speculative or negative way, then it is not a candidate for coreference.

VI.5.5 Identical Sets

In this type of coreference the anaphor and the antecedent both refer to sets that are identical or to identical types. In the following example, "The Al Aqsa Brigades," "a militia," and "The group" all refer to the same set of people:

The Al Aqsa Brigades4, a militia4 linked to Yasser Arafat's Fatah, claimed responsibility. The group4 said that Abed Hassouna1 from a village7 near the Palestinian town of Nablus carried out the attack2 to avenge the death of Raed Karmi5, (the militia)4's leader5.

Identifying identical sets is usually extremely difficult because deep knowledge about the domain is needed. If we have a lexical dictionary such as WordNet that includes hyponyms and hypernyms, we may be able to identify identical sets; we can deduce, for instance, that a "militia" is a kind of "group."

VI.5.6 Function–Value Coreference

A function–value coreference is characterized by phrases that have a function–value relationship. Typically, the function will be descriptive and the value will be numeric. In the following text there are two function–value pairs:

Evolved Digital Systems's Revenues1 were $4.1M1 for the quarter, up 61% compared to the first quarter of 2003. Net Loss2 declined by 34% to $5.6M2.

Function: Evolved Digital Systems's Revenues / Value: $4.1M
Function: Net Loss / Value: $5.6M

VI.5.7 Ordinal Anaphora

Ordinal anaphora involves an ordinal number like first or second or an adjective such as former or latter, as in the following example:

IBM and Microsoft1 were the final candidates, but the agency preferred the latter company1.

VI.5.8 One-Anaphora

A one-anaphora consists of an anaphoric expression realized by a noun phrase containing the word "one," as in the following:

If you cannot attend a tutorial1 in the morning, you can go for an afternoon one1.

VI.5.9 Part–Whole Coreference

Part–whole coreference occurs when the anaphor refers to a part of the antecedent, as in the following:

John has bought a new car1. The indicators1 use the latest laser technology.

As in the case of identifying identical sets discussed in Section VI.5.5, a lexical resource such as WordNet is needed. In particular, WordNet includes the meronymy–holonymy relationship, which can help us identify that indicators are a part of a car.

VI.5.10 Approaches to Anaphora Resolution

Most of the work on coreference resolution focuses on pronominal resolution because it is the most common type of coreference and also one of the easier types to resolve. Most approaches to pronominal resolution share a common overall structure:

■ Identify the relevant paragraphs (or sentences) around the pronoun in which to search for candidate antecedents.
■ Using a set of consistency checks, delete the candidates that do not pass all of the checks.
■ Assign salience values to each of the surviving candidates according to a set of predefined rules.
■ Pick the candidate with the highest salience value.
Some of these approaches require heavy preprocessing and rely on full parsers, whereas others are fairly knowledge-poor and rely on shallow parsing. The focus here is on approaches that do not require full parsing of the sentences, because full parsing is too time-consuming and hence prohibitive in a real-world IE system.

VI.5.10.1 Hobbs Algorithm

The most simplistic algorithm is the Hobbs algorithm, also called the Naive algorithm (Hobbs 1986). This algorithm works by specifying a total order on the noun phrases in the prior discourse and comparing each noun phrase against a set of constraints imposed by the features of the anaphor (i.e., gender, number). The first antecedent to satisfy all the constraints is chosen. A few points to note about this algorithm:

■ For two candidate antecedents a and b, if a is encountered before b in the search space, then a is preferred over b.
■ The salience given to the candidate antecedents imposes a total ordering on the antecedents – that is, no two antecedents will have the same salience.
■ The algorithm cannot handle ambiguity and will always resolve a pronoun as long as there is at least one possible antecedent.

VI.5.11 CogNIAC (Baldwin 1995)

CogNIAC is a pronoun resolution engine designed around the assumption that there is a subclass of anaphora that does not require general-purpose reasoning. Among the kinds of information CogNIAC does require are POS tagging, simple noun phrase recognition, and basic semantic category information like gender and number. The system is based on a set of high-confidence rules that are successively applied to the pronoun under consideration. The rules are ordered according to their importance and relevance to anaphora resolution. The processing of a pronoun stops when one rule is satisfied. Below are the six rules used by the system. For each of them, the anaphor prefix of a sentence is defined as the text portion of the sentence from the beginning of the sentence to the position of the anaphor.

1. Unique Antecedent.
   Condition: There is a single valid antecedent A in the relevant discourse.
   Action: A is the selected antecedent.
2. Reflexive Pronoun.
   Condition: The anaphor is a reflexive pronoun.
   Action: Pick the nearest valid antecedent in the anaphor prefix of the current sentence.
3. Unique in Current + Preceding.
   Condition: There is a single valid antecedent A in the preceding sentence and the anaphor prefix of the current sentence.
   Action: A is the selected antecedent.
   Example: Rupert Murdock's News Corp. confirmed his interest in buying back the ailing New York Post. But analysts said that if he winds up bidding for the paper, . . .
4. Possessive Pronoun.
   Condition: The anaphor is a possessive pronoun, and there is a single exact copy of the possessive phrase in the previous sentence.
   Action: The antecedent of the latter copy is the same as that of the former.
   Example: After he was dry, Joe carefully laid out the damp towel in front of his locker. Travis went over to his locker, took out a towel and started to dry off.
5. Unique in Current Sentence.
   Condition: There is a single valid antecedent A in the anaphor prefix of the current sentence.
   Action: A is the selected antecedent.
6. Unique Subject–Subject Pronoun.
   Condition: The subject of the previous sentence is a valid antecedent A, and the anaphor is the subject of the current sentence.
   Action: A is the selected antecedent.
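A minimal sketch of the rule-cascade control flow CogNIAC uses; the candidate encoding and the two rules shown are illustrative simplifications of the six rules above, not the actual system:

def resolve_pronoun(pronoun, candidates):
    # Apply ordered high-confidence rules and stop at the first that fires.
    # Each candidate is a dict like {"text": ..., "sentence": n}, where n is
    # the sentence offset relative to the anaphor (0 = same sentence), and
    # the list is assumed already filtered for gender/number agreement.

    # Rule 1: unique antecedent in the relevant discourse.
    if len(candidates) == 1:
        return candidates[0]

    # Rule 3: unique antecedent in the preceding sentence plus the
    # anaphor prefix of the current sentence.
    nearby = [c for c in candidates if c["sentence"] in (-1, 0)]
    if len(nearby) == 1:
        return nearby[0]

    return None   # no high-confidence rule fired; leave the pronoun unresolved

candidates = [
    {"text": "Rupert Murdock", "sentence": -1},
    {"text": "the Post", "sentence": -3},
]
print(resolve_pronoun("he", candidates))   # rule 3 selects Rupert Murdock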
VI.5.10.3 Kennedy and Boguraev
This approach is based on Lappin and Leass's (1994) method but without the need for full parsing. The algorithm was used to resolve personal pronouns, reflexives, and possessives. It works by constructing coreference equivalence classes. Each such class has a salience that is computed on the basis of a set of ten factors, and each pronoun is resolved to the antecedent that belongs to the class with the highest salience. The factors used by the salience algorithm are listed below; all conditions refer to the current candidate for which we want to assign salience, and GFUN denotes the grammatical function of the candidate.

■ SENT-S: 100 iff the candidate is in the current sentence
■ CNTX-S: 50 iff the candidate is in the current context
■ SUBJ-S: 80 iff GFUN = subject
■ EXST-S: 70 iff the candidate is in an existential construction
■ POSS-S: 65 iff GFUN = possessive
■ ACC-S: 50 iff GFUN = direct object
■ DAT-S: 40 iff GFUN = indirect object
■ OBLQ-S: 30 iff the candidate is the complement of a preposition
■ HEAD-S: 80 iff EMBED = NIL
■ ARG-S: 50 iff ADJUNCT = NIL

As an example of the salience assignment, consider the following text fragment:

Sun's prototype Internet access device uses a 110-MHz MicroSPARC processor, and is diskless. Its dimensions are 5.5 inches × 9 inches × 2 inches.

Anaphors and candidates are represented by their offset in the text (from the beginning of the document), their grammatical function, and several other syntactic features. The structure of each candidate is Element: Offset/Salience:

ANAPHOR: Its: 347
CANDIDATES:
Internet access device: 335/180 (= 50 + 80 + 50)
MicroSPARC processor: 341/165 (= 50 + 65 + 50)
Sun's: 333/140 (= 50 + 40 + 50)

The first sentence in this fragment includes three candidates with different grammatical functions. The second sentence, which includes the anaphor, does not include any candidate satisfying the basic constraints. The three candidates in the first sentence are ranked according to their salience; the main factor determining the salience is the grammatical function of each candidate. Internet access device is the subject of the sentence and hence satisfies the SUBJ-S condition, is the optimal candidate, and is selected as the antecedent of Its.

VI.5.10.4 Mitkov
In contrast to the previous approaches, which use mostly positive rules, Mitkov's approach (Mitkov 1998) is based on a set of boosting and impeding indicators applied to each candidate. The approach takes as input the output of a part-of-speech tagger, identifies the noun phrases that precede the anaphor within a distance of two sentences, checks them for gender and number agreement with the anaphor, and then applies the genre-specific antecedent indicators to the remaining candidates. The boosting indicators assign a positive score to a matching candidate, reflecting a positive likelihood that it is the antecedent of the current pronoun. In contrast, the impeding indicators apply a negative score to the matching candidate, reflecting a lack of confidence that it is the antecedent of the current pronoun. The candidate with the highest combined score is selected. Here are some of the indicators used by Mitkov:

■ Definiteness. Definite noun phrases in previous sentences are more likely antecedents of pronominal anaphors than indefinite ones (definite noun phrases score 0 and indefinite ones are penalized by −1).
■ Givenness.
Noun phrases in previous sentences representing the "given information" are deemed good candidates for antecedents and score 1 (candidates not representing the theme score 0). The given information is usually the first noun phrase in a nonimperative sentence.
■ Indicating Verbs. If a verb in the sentence has a stem that is a member of {discuss, present, illustrate, identify, summarize, examine, describe, define, show, check, develop, review, report, outline, consider, investigate, explore, assess, analyze, synthesize, study, survey, deal, cover}, then the first NP following the verb is the preferred antecedent.
■ Lexical Reiteration. Lexically reiterated noun phrases are preferred as candidates for antecedent (an NP scores 2 if it is repeated within the same paragraph twice or more, 1 if repeated once, and 0 if it is not repeated). The matching is done in a loose way such that synonyms and NPs sharing the same head are considered identical.
■ Section Heading Preference. If a noun phrase occurs in the heading of the section containing the current sentence, then it is considered the preferred candidate.
■ "Nonprepositional" Noun Phrases. A "nonprepositional" noun phrase is given a higher preference than a noun phrase that is part of a prepositional phrase (0, −1).
Example: Insert the cassette into the VCR making sure it is suitable for the length of recording.
Here VCR is penalized for being part of a prepositional phrase, and the pronoun is resolved to the cassette.
■ Collocation Pattern Preference. This preference is given to candidates having an identical verb collocation pattern with the pronoun, of the pattern "noun phrase (pronoun), verb" or "verb, noun phrase (pronoun)."
Example: Press the key down and turn the volume up . . . Press it again.
Here key is the preferred antecedent because it shares the same verb (press) with the pronoun ("it").
■ Immediate Reference. Given a pattern of the form ". . . You? V1 NP . . . con you? V2 it (con you? V3 it)," where con ∈ {and/or/before/after . . . }, the noun phrase immediately after V1 is a very likely candidate for the antecedent of the pronoun "it" immediately following V2 and is therefore given preference (scores 2 and 0).
Example: To print the paper1, you can stand the printer2 up or lay it2 flat. To turn on the printer2, press the Power button3 and hold it3 down for a moment. Unwrap the paper1, form it1 and align it1, then load it1 into the drawer.
■ Referential Distance. In complex sentences, noun phrases receive the following scores based on how close they are to the anaphor: previous clause: 2; previous sentence: 1; two sentences back: 0; three sentences further back: −1. In simple sentences the scores are: previous sentence: 1; two sentences back: 0; three sentences further back: −1.
■ Domain Terminology Preference. NPs representing domain terms are more likely to be the antecedent (score 1 if the NP is a term and 0 if not).

VI.5.10.5 Evaluation of Knowledge-Poor Approaches
For many years, one of the main problems in contrasting the performance of the various systems and algorithms had been that there was no common ground on which such a comparison could reasonably be made: each algorithm used a different set of documents and made different types of assumptions. To solve this problem, Barbu (Barbu and Mitkov 2001) proposed the idea of the "evaluation workbench" – an open-ended architecture that allows the incorporation of different algorithms and their comparison on the basis of the same preprocessing tools and data.
The three algorithms just described were all implemented and compared using the same workbench. They receive as input the same representation of the input file, which is generated by running an XML parser over the file resulting from the preprocessing phase. Each noun phrase receives the following list of features:
■ the original word form
■ the lemma of the word or of the head of the noun phrase
■ the starting and ending position in the text
■ the part of speech
■ the grammatical function (subject, object, . . . )
■ the index of the sentence that contains the referent
■ the index of the verb related to the referent.

In addition, two definitions should be highlighted:
■ Precision = number of correctly resolved anaphors / number of anaphors attempted to be resolved.
■ Success Rate = number of correctly resolved anaphors / number of all anaphors.

The overall results as reported in Mitkov are summarized in the following table:

            K&B      CogNIAC   Mitkov
Precision   52.84%   42.65%    48.81%
Success     61.6%    49.72%    56.9%

VI.5.10.6 Machine Learning Approaches
One of the learning approaches (Soon et al. 2001) is based on building a classifier from the training examples in an annotated corpus. This classifier is able to take any pair of NLP elements and return true if they refer to the same real-world entity and false otherwise. The NLP elements can be nouns, noun phrases, or pronouns and are called markables. The markables are derived from the document by using the regular NLP preprocessing steps as outlined in the previous section (tokenization, zoning, part-of-speech tagging, noun phrase extraction, and entity extraction). In addition to deriving the markables, the preprocessing steps make it possible to create a set of features for each markable. These features are used by the classifier to determine whether any two markables have a coreference relation.

Some Definitions
■ Indefinite Noun Phrase. An indefinite noun phrase is a phrase that is used to introduce a specific object or set of objects believed to be new to the addressee (e.g., a new automobile, some sheep, and five accountants).
■ Definite Noun Phrase. This is a noun phrase that starts with the article "the."
■ Demonstrative Noun Phrase. This is a noun phrase that starts with "this," "that," "those," or "these."

Features of Each Pair of Markables
■ Sentence Distance: 0 if the markables are in the same sentence.
■ Pronoun: 1 if one of the markables is a pronoun; 0 otherwise.
■ Exact Match: 1 if the two markables are identical; 0 otherwise.
■ Definite Noun Phrase: 1 if one of the markables is a definite noun phrase; 0 otherwise.
■ Demonstrative Noun Phrase: 1 if one of the markables is a demonstrative noun phrase; 0 otherwise.
■ Number Agreement: 1 if both markables are singular or both are plural; 0 otherwise.
■ Semantic Agreement: 1 if the markables belong to the same semantic class (based on the entity extraction component).
■ Gender Agreement: 1 if the two markables have the same gender (male, female), 0 if not, and 2 if the gender is unknown.
■ Proper Name: 1 if both markables are proper names; 0 otherwise.
■ Alias: 1 if one markable is an alias of the other entity (like GE and General Electric).

Generating Training Examples
■ Positive Examples. Assume that in a given document we have found four markables that refer to the same real-world entity, {M1, M2, M3, M4}. For each adjacent pair of markables we generate a positive example. In this case, we have three positive examples – namely {M1, M2}, {M2, M3}, and {M3, M4}.
■ Negative Examples. Assume that markables a, b, c appear between M1 and M2; then we generate three negative examples {a, M2}, {b, M2}, {c, M2}.
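The following Python sketch illustrates this pair-generation scheme; the markable objects are assumed to carry a document-order offset attribute, which is an illustrative assumption.

def generate_pairs(chain, all_markables):
    # chain: the markables of one coreference chain, in document order.
    # all_markables: every markable in the document, in document order.
    positives, negatives = [], []
    for first, second in zip(chain, chain[1:]):
        positives.append((first, second))
        # Every markable lying strictly between the two members of a
        # positive pair yields a negative example with the second member.
        between = [m for m in all_markables
                   if first.offset < m.offset < second.offset
                   and m not in chain]
        negatives.extend((m, second) for m in between)
    return positives, negatives

For the chain {M1, M2, M3, M4} with a, b, c between M1 and M2, this yields exactly the positive pairs {M1, M2}, {M2, M3}, {M3, M4} and the negative pairs {a, M2}, {b, M2}, {c, M2}.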
The Algorithm

Identify all markables
For each anaphor A
    Let M1, . . . , Mn be all markables from the beginning of the document till A
    For i = n downto 1
        if PairClassifier(A, Mi) = true then
            (A, Mi) is an anaphoric pair
            exit
        end if
    end for
end for

Evaluation
Training on 30 documents yielded a classifier that was able to achieve a precision of 68 percent and a recall of 52 percent (F1 = 58.9%).

Ng and Cardie (Ng and Cardie 2002) have suggested two types of extensions to the Soon et al. corpus-based approach. First, they applied three extralinguistic modifications to the machine learning framework, which together provided substantial and statistically significant gains in coreference resolution precision. Second, they expanded the Soon et al. feature set from 12 features to an arguably deeper set of 53. Ng and Cardie have also proposed additional lexical, semantic, and knowledge-based features – most notably, 26 additional grammatical features that include a variety of linguistic constraints and preferences. The main modifications suggested by Ng and Cardie are as follows:

■ Best-first Clustering. Rather than a right-to-left search from each anaphoric NP for the first coreferent NP, a right-to-left search for a highly likely antecedent is performed. As a result, the coreference clustering algorithm was modified to select as the antecedent of an NP the preceding NP with the highest coreference likelihood value from among the preceding NPs with coreference class values above 0.5.
■ Training Set Creation. Rather than generate a positive training example for each anaphoric NP and its closest antecedent, a positive training example is generated for its most confident antecedent. For a nonpronominal NP, the closest nonpronominal preceding antecedent is selected as the most confident antecedent; for pronouns, the closest preceding antecedent is selected.
■ String Match Feature. Soon's string match feature (SOON STR) tests whether the two NPs under consideration are the same string after removing determiners from each. Rather than using the same string match for all types of anaphors, finer granularity is used: exact string match is likely to be a better coreference predictor for proper names than it is for pronouns, for example. Specifically, the SOON STR feature is replaced by three features – PRO STR, PN STR, and WORDS STR – that restrict the application of string matching to pronouns, proper names, and nonpronominal NPs, respectively.

Overall, the learning framework and linguistic knowledge source modifications boost the performance of Soon's learning-based coreference resolution approach from an F-measure of 62.6 to 70.4 on MUC-6 and from 60.4 to 63.4 on MUC-7.

VI.6 INDUCTIVE ALGORITHMS FOR IE
Rule induction algorithms produce symbolic IE rules based on a corpus of annotated documents.

VI.6.1 WHISK
WHISK is a supervised learning algorithm that uses hand-tagged examples for learning information extraction rules. WHISK learns regular expressions for each of the fields it is trying to extract. The algorithm enables the integration of user-defined semantic classes; such classes enable the system to adjust to the specific jargon of a given domain.
As an example, consider the domain of apartment rental ads. We want to accommodate all types of spellings of bedroom, and hence we introduce the following semantic class:

Bdrm = (brs | br | bds | bdrm | bd | bedroom | bed).

WHISK learns the regular expressions by using an example-covering algorithm that tries to cover as many positive examples as possible while not covering any negative example. The algorithm begins learning a single rule by starting with an empty rule and adding one term at a time until either no negative examples are covered by the rule or the prepruning criterion has been satisfied. Each time, the term added is the one that minimizes the Laplacian, (e + 1)/(n + 1), where e is the number of negative examples covered by the rule as a result of adding the term and n is the number of positive examples covered by the rule as a result of adding the term. The process of adding rules repeats until the set of learned rules covers all the positive training instances. Finally, postpruning removes some of the rules to prevent overfitting.

Here is an example of a WHISK rule:

ID:: 1
Input:: * (Digit) 'BR' * '$' (number)
Output:: Rental {Bedrooms $1} {Price $2}

For instance, from the text "3 BR, upper flr of turn of ctry. Incl gar, grt N. Hill loc 995$. (206)-999-9999," the rule would extract the frame Bedrooms – 3, Price – 995. The '*' character in the pattern matches any number of characters (an unlimited jump). Patterns enclosed in parentheses become numbered elements in the output pattern; hence (Digit) is $1 and (number) is $2.

VI.6.2 BWI
BWI (boosted wrapper induction) is a system that utilizes wrapper induction techniques for traditional information extraction. IE is treated as a classification problem that entails trying to approximate two boundary functions, Xbegin(i) and Xend(i). Xbegin(i) is equal to 1 if the ith token starts a field that is part of the frame to be extracted and 0 otherwise; Xend(i) is defined in a similar way for tokens that end a field. The learning algorithm approximates each X function by taking a set of pairs of the form (i, X(i)) as training data.

Each field is extracted by a wrapper W = ⟨F, A, H⟩, where
■ F is a set of begin boundary detectors,
■ A is a set of end boundary detectors,
■ H(k) is the probability that the field has length k.

A boundary detector is just a sequence of tokens with wildcards (a kind of regular expression). The wrapper predicts a field between tokens i and j according to

W(i, j) = 1 if F(i)A(j)H(j − i + 1) > σ, and 0 otherwise, where
F(i) = Σk CFk · Fk(i),   A(j) = Σk CAk · Ak(j).

W(i, j) is a naive Bayesian approximation of the probability that there is a field between tokens i and j, with uniform priors. Clearly, the higher σ is set, the better the precision and the lower the recall; if σ is set to 0, we get the highest recall but compromise precision. The BWI algorithm learns the two detectors by using a greedy algorithm that extends the prefix and suffix patterns while there is an improvement in the accuracy. The sets F(i) and A(i) are generated from the detectors by using the AdaBoost algorithm. The detector pattern can include specific words and regular expressions that work on a set of wildcards such as <num>, <Cap>, <LowerCase>, <Punctuation>, and <Alpha>. (A minimal sketch of the W(i, j) scorer is given below.)
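Here is a minimal Python sketch of the W(i, j) field scorer defined above. The detector weights (CFk, CAk) would come from AdaBoost and the length histogram H from training data; the pattern-matching interface is a hypothetical stand-in.

def detector_score(detectors, tokens, i):
    # F(i) = sum_k CFk * Fk(i): the weighted vote of all boundary
    # detectors that match at token position i.
    return sum(weight for pattern, weight in detectors
               if pattern.matches(tokens, i))

def W(tokens, i, j, begin_detectors, end_detectors, H, sigma):
    # Predict a field between tokens i and j iff F(i)A(j)H(j-i+1) > sigma.
    f = detector_score(begin_detectors, tokens, i)   # begin score F(i)
    a = detector_score(end_detectors, tokens, j)     # end score A(j)
    h = H.get(j - i + 1, 0.0)                        # P(field length = j-i+1)
    return 1 if f * a * h > sigma else 0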
When the BWI algorithm was evaluated on the acquisition relations from the Reuters news collection, it achieved the following results compared with an HMM:

Slot                BWI     HMM
Acquiring Company   34.1%   30.9%
Dollar Amount       50.9%   55.5%

VI.6.3 The (LP)2 Algorithm
The (LP)2 algorithm learns from an annotated corpus and induces two sets of rules: tagging rules generated by a bottom-up generalization process, and correction rules that correct mistakes and omissions made by the tagging rules. A tagging rule is a pattern that contains conditions on the words preceding the place where a tag is to be inserted and conditions on the words that follow the tag. Conditions can be words, lemmas, lexical categories (such as digit, noun, or verb), case (lower or upper), or semantic categories (such as time-id or cities).

The (LP)2 algorithm is a covering algorithm that tries to cover all training examples. The initial tagging rules are generalized by dropping conditions. A sample rule for tagging the stime (start time of a seminar) field is shown below:

Word Index   word   lemma   LexCat   Case   SemCat    Tag Inserted
3            at                                       <stime>
4                           digit
5                                           time-id

The correction rules take care of incorrect boundaries set for the tags and shift them to fix the errors. An example is "at <stime> 4 </stime> pm," where the </stime> tag should be shifted one token to the right. The correction rules learn from the mistakes of the tagging process on the training corpus; the action taken by a correction rule is just to shift the tag rather than introduce a new tag. The same covering algorithm used for learning the tagging rules is used for learning the correction rules.

(LP)2 was also tested on extracting information from financial news articles and obtained the following results:

Tag                    F1     Tag              F1
Location               70%    Organization     86%
Currency               85%    Stock Name       85%
Stock Exchange Name    91%    Stock Category   86%
Stock Exchange Index   97%    Stock Type       92%

These results are not on par with those achieved by probabilistic extraction algorithms such as HMM, CRF, and MEMM. It seems that the inductive algorithms are suitable for semistructured domains, where the rules are fairly simple, whereas for free-text documents (such as news articles) the probabilistic algorithms perform much better.

VI.6.4 Experimental Evaluation
All four algorithms were evaluated on the CMU seminar announcement database and achieved the following results (F1):

Slot         BWI     HMM     (LP)2   WHISK
Speaker      67.7%   76.6%   77.6%   18.3%
Location     76.7%   78.6%   75.0%   66.4%
Start Time   99.6%   98.5%   99.0%   92.6%
End Time     93.9%   62.1%   95.5%   86%

VI.7 STRUCTURAL IE

VI.7.1 Introduction to Structural IE
Most text mining systems simplify the structure of the documents they process by ignoring much of the structural or visual characteristics of the text (e.g., font type, size, location, etc.) and processing the text either as a linear sequence or as a bag of words. This allows the algorithms to focus on the semantic aspects of the document. However, valuable information is lost in these approaches, which ignore the information contained in the visual elements of a document. Consider, for example, an article in a journal. The title is readily identifiable based on its special font and location but much less so based on its semantic content alone, which may be similar to that of the section headings. The same holds true for the author names, section headings, running title, and so on.
Thus, much important information is provided by the visual layout of the document – information that is ignored by most text mining and other document analysis systems. One can, however, leverage preprocessing techniques that do not focus on the semantic content of the text but instead on the visual layout alone in an effort to extract the information contained in the layout elements. These types of techniques entail an IE task in which one is provided with a document and seeks to discover specific fields of the document (e.g., the title, author names, publication date, figure captions, bibliographical citations, etc.). Such techniques have been termed structural or visual information extraction.

Of course, within the overall context of text mining preprocessing, a structural or visual IE approach is not aimed at replacing the semantic one. Instead, the structural IE approach can be used to complement other, more conventional text mining preprocessing processes.

This section describes a recently developed general algorithm that allows the IE task to be performed on the basis of the visual layout of the document. The algorithm employs a machine learning approach whereby the system is first provided with a set of training documents in which the desired fields are manually tagged. On the basis of these training examples, the system automatically learns how to find the corresponding fields in future documents.

VI.7.2 Overall Problem Definition
A document D is a set of primitive elements D = {e1, . . . , en}. A primitive element can be a character, a line, or any other visual object as determined by the document format. A primitive element can have any number of visual attributes such as font size and type, physical location, and so on. The bounding box attribute, which provides the size and location of the bounding box of the element, is assumed to be available for all primitive elements. We define an object in the document to be any set of primitive elements.

The visual information extraction (VIE) task is as follows. We are provided with a set of target fields F = {f1, . . . , fk} to be extracted and a set of training documents T = {T1, . . . , Tm} wherein all occurrences of the target fields are annotated. Specifically, for each target field f and training document T, we are provided with the object f(T) of T that is of type f (f(T) = 0 if f does not appear in T). The goal, when presented with an unannotated query document Q, is to annotate the occurrences of the target fields that exist in Q (not all target fields need be present in each document).

Practically, the VIE task can be decomposed into two subtasks. First, for each document (both training and query) one must group the primitive elements into meaningful objects (e.g., lines, paragraphs, etc.) and establish the hierarchical structure among these objects. Then, in the second stage, the structure of the query document is compared with the structures of the training documents to find the objects corresponding to the target fields.

It has also proven possible to enhance the results by introducing the notion of templates, which are groups of training documents with a similar layout (e.g., articles from the same journal). Using templates, one can identify the essential features of the page layout while ignoring the particularities of any specific document. Templates are discussed in detail in the sections that follow.
A brief examination is also made of a real-world system that was implemented for a typical VIE task involving a set of documents containing financial analyst reports. The documents were in PDF format, and the target fields included the title, authors, publication dates, and others.

VI.7.3 The Visual Elements Perceptual Grouping Subtask
Recall that a document is a set of primitive elements such as characters, figures, and so on, and that the objects of a document are sets of primitive elements. Target fields, in general, are objects. Thus, the first step in the visual IE task is to group the primitive elements of the documents into higher level objects. The grouping should yield the conceptually meaningful objects of the document, such as paragraphs, headings, and footnotes. For humans, the grouping process is easy and is generally performed unconsciously on the basis of the visual structure of the document. As with other types of perceptual grouping requirements in IE, the goal is to mimic the human perceptual grouping process.

VI.7.4 Problem Formulation for the Perceptual Grouping Subtask
One can model the structure of the objects of a document as a tree in which the leaves are primitive elements and the internal nodes are (composite) objects. This structure is called the object tree, or O-Tree, of the document. The O-Tree creates a hierarchical structure among objects in which higher level objects consist of groups of lower level objects. This hierarchical structure reflects the conceptual structure of documents, in which objects such as columns are groups of paragraphs, which, in turn, are groups of lines, and so on.

The exact levels and objects represented in the O-Tree are application and format dependent. For an HTML document, for example, the O-Tree may include objects representing tables, menus, the text body, and other elements, whereas for PDF documents the O-Tree may include objects representing paragraphs, columns, lines, and so on. Accordingly, for each file format and application we define the object hierarchy H, which determines the set of possible object types and a hierarchy among these objects. Any object hierarchy must contain an object of type document, which is at the root of the hierarchy. When constructing an O-Tree for a document, each object is labeled by one of the object types defined in the object hierarchy, and the tree structure must correspond to the hierarchical structure defined in the hierarchy.

Formally, an object hierarchy H is a rooted DAG that satisfies the following:
■ The leaf nodes are labeled by primitive element types.
■ Internal nodes are labeled by object types.
■ The root node is labeled by the document object type.
■ For object types x and y, type y is a child of x if an object of type x can (directly) contain an object of type y.

For a document D = {e1, . . . , en} and an object hierarchy H, an O-Tree of D according to H is a tree O with the following characteristics:
■ The leaves of O consist of all primitive elements of D.
■ The internal nodes of O are objects of D.
■ If X and X′ are nodes of O (objects or primitive elements) and X ⊂ X′, then X′ is an ancestor (or parent) of X.
■ Each node X is labeled by a label from H, denoted label(X).
■ If X′ is the parent of X in O, then label(X′) is a parent of label(X) in H.
■ label(root) = Document.

(A minimal sketch of this structure is given below.)
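The following Python sketch models the O-Tree definition just given; the hierarchy fragment is an illustrative assumption patterned after the type hierarchy used later in this section.

# The object hierarchy H, modeled as a parent -> children mapping.
H = {
    "Document": ["Image", "Graphic line", "Column"],
    "Column": ["Paragraph"],
    "Paragraph": ["Text line"],
    "Text line": ["Text"],     # "Text" is a primitive element type
}

class Node:
    def __init__(self, label, children=None):
        self.label = label
        self.children = children or []

def is_valid_otree(node, hierarchy, is_root=True):
    # The root must be labeled by the document object type.
    if is_root and node.label != "Document":
        return False
    for child in node.children:
        # label(child) must be a child of label(node) in H.
        if child.label not in hierarchy.get(node.label, []):
            return False
        if not is_valid_otree(child, hierarchy, is_root=False):
            return False
    return True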
VI.7.5 Algorithm for Constructing a Document O-Tree
Given a document, one constructs an O-Tree for it. In doing so, the aim is to construct objects best reflecting the true grouping of the elements into "meaningful" objects (e.g., paragraphs, columns, etc.). When constructing an object, we see to it that the following requirements are met:
■ The elements of the object are within the same physical area of the page. Specifically, each object must be connected – that is, an object X cannot be decomposed into two separate objects X1 and X2 such that any line connecting X1 and X2 necessarily crosses an element of a different object.
■ The elements of the object have similar characteristics (e.g., similar font type, similar font size, etc.). Specifically, one assumes a fitness function fit(·,·) such that for any two objects X and Y, where label(Y) is a child of label(X), fit(Y, X) provides a measure of how fit Y is as an additional member of X (e.g., if X is a paragraph and Y a line, how similar Y is to the other lines of X). One adds Y to X only if fit(Y, X) is above some threshold value γ. The exact nature of the function fit(·,·) and the threshold value are format and domain dependent.

Given these two criteria, the O-Tree can be constructed in a greedy fashion, from the bottom up, layer by layer. In doing so, one should always prefer to enlarge the existing objects of the layer, starting with the largest object. If no existing object can be enlarged and there are still "free" objects from the previous layer, a new object is created. The procedure terminates when the root object, labeled Document, is completed. The algorithm is described in the following pseudocode (a Python sketch of the inner grouping loop follows it):

Input: D – Document
Output: O-Tree for D
1. For each type t ∈ H do: let level(t) be the length of the longest path from t to a leaf
2. Let h = level(Document)
3. Objects(0) ← D
4. For i = 1 to h do
5.     Objects(i) ← ∅
6.     free ← Objects(i − 1)
7.     While free ≠ ∅ do
8.         For each X ∈ Objects(i) in order of descending size do
9.             For each Y ∈ free in order of increasing distance from X do
10.                If Y is a neighbor of X and fit(Y, X) ≥ γ then
11.                    X ← X ∪ Y
12.                    make Y a child of X
13.                    Remove Y from free
14.                    Break (go to line 7)
15.        For each t ∈ H such that level(t) = i do
16.            if Objects(i) does not include an empty object of type t
17.                Add an empty object of type t to Objects(i)
18.    end while
19.    Remove empty objects from Objects(i)
20. end for
21. return the resulting O-Tree
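Here is a minimal Python sketch of one layer of this bottom-up grouping; fit() and distance() stand in for the format-dependent implementations, and the threshold gamma is illustrative.

def group_layer(free, fit, distance, gamma=0.5):
    # free: the ungrouped objects of the previous layer (e.g., text lines).
    # Returns the objects of the next layer (e.g., paragraphs) as lists.
    objects = []
    while free:
        extended = False
        # Prefer enlarging existing objects, starting with the largest.
        for obj in sorted(objects, key=len, reverse=True):
            # Try free elements in order of increasing distance from obj.
            for cand in sorted(free, key=lambda y: distance(y, obj)):
                if fit(cand, obj) >= gamma:
                    obj.append(cand)       # enlarge the existing object
                    free.remove(cand)
                    extended = True
                    break
            if extended:
                break
        if not extended:
            # No existing object can be enlarged: seed a new object.
            objects.append([free.pop(0)])
    return objects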
VI.7.6 Structural Mapping
Given a visual information extraction task, one first constructs an O-Tree for each of the training documents as well as for the query document, as described in the previous section. Once all the documents have been structured as O-Trees, it is necessary to find the objects of Q (the query document) that correspond to the target fields. This is done by comparing the O-Tree of Q, and the objects therein, to those of the training documents. The comparison is performed in two stages: first, the training document that is visually most similar to the query document is found; then, one maps between the objects of the two documents to discover the target fields in the query document.

VI.7.6.1 Basic Algorithm
■ Document Similarity. Consider a query document Q and training documents T = {T1, . . . , Tn}. We seek the training document Topt that is visually most similar to the query document. We do so by comparing the O-Trees of the documents, concentrating only on similarities between the top levels of the O-Trees; the reason is that even similar documents may still differ in the details.

Let O(Q) and O(T1), . . . , O(Tn) be the O-Trees of the query document and the training documents, respectively, and let H be the type hierarchy. We define a subgraph of H, which we call the signature hierarchy S, consisting of the types in H that determine the features of the global layout of the page (e.g., columns, tables). The types included in the signature are implementation dependent, but generally the signature would include the top one or two levels of the type hierarchy.

For determining the similarity between objects we assume the existence of a similarity function sim(X, Y), which provides a measure of similarity between objects of the same type based on object characteristics such as size, location, and fonts (sim(X, Y) is implementation dependent). Given a query document Q and a training document T, for each object X in the signature of T we find the object X′ of Q (of the same type as X) that is most similar to X. We then compute the average similarity over all objects in the signature of T to obtain the overall similarity score between the documents and choose the document with the highest similarity score. A description of the procedure is provided below:

Input: Q, T1, . . . , Tn, and their respective O-Trees
Output: Topt (the training document most similar to the query document)
1. For i = 1 to n do
2.     total ← 0
3.     count ← 0
4.     For each t ∈ S do
5.         For each X ∈ O(Ti) of type t do
6.             s(X) ← max{sim(X, X′) | X′ ∈ O(Q), X′ of type t}
7.             total ← total + s(X)
8.             count ← count + 1
9.     score(i) ← total / count
10. end for
11. opt ← argmax{score(i)}
12. return Topt

■ Finding the Target Fields. Once the most similar training document Topt has been determined, it remains to find the objects of Q that correspond to the target fields as annotated in Topt. One does so by finding, for each target field f, the object within O(Q) that is most similar to f(Topt) (the object in O(Topt) annotated as f). This object is found in an exhaustive manner by going over all objects of O(Q). One also makes sure that the similarity between this object and the corresponding object of Topt is beyond a certain threshold α; otherwise one decides that the field f has not been found in Q (either because it is not there or because we have failed to find it).

A description of the procedure for finding the target fields is provided below. Note that the annotation of the training documents is performed before (and independently of) the construction of the O-Trees; thus, an annotated object need not itself appear in the O-Tree. If this is the case, line 2 sees to it that one takes the minimal object of O(Topt) that fully contains the annotated object.

Input:
■ Q, Topt (and their respective O-Trees)
■ {f(Topt) | f ∈ F} (target fields in Topt)
Output: {f(Q) | f ∈ F} (target fields in Q)
1. For each f ∈ F do
2.     Let f̄(Topt) ∈ O(Topt) be minimal such that f(Topt) ⊆ f̄(Topt)
3.     f(Q) ← argmax{sim(f̄(Topt), X) | X ∈ O(Q), X of type t}
4.     if sim(f̄(Topt), f(Q)) < α then f(Q) ← 0

(A Python sketch of this mapping appears below.)
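The following Python sketch mirrors the two mapping stages above; objects are grouped by type in plain dictionaries, and sim() and the threshold alpha are the implementation-dependent pieces.

def best_training_doc(query_objs, training_docs, sim, signature_types):
    # query_objs and each training doc: {type: [objects of that type]}.
    def score(doc_objs):
        scores = [max(sim(x, x2) for x2 in query_objs[t])
                  for t in signature_types for x in doc_objs.get(t, [])]
        return sum(scores) / len(scores)
    return max(training_docs, key=score)

def find_fields(query_objs, annotated_fields, sim, alpha):
    # annotated_fields: {field name: annotated object in T_opt}.
    result = {}
    for name, obj in annotated_fields.items():
        candidates = query_objs.get(obj.type, [])
        best = max(candidates, key=lambda x: sim(obj, x), default=None)
        if best is not None and sim(obj, best) >= alpha:
            result[name] = best      # field found in the query document
    return result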
VI.7.7 Templates
The preceding algorithm is based on finding the single most similar document to the query document and then extracting all the target fields on the basis of this document alone. Although this provides good results in most cases, there is the danger that the particularities of any single document may reduce the effectiveness of the algorithm. To overcome this problem, the notion of templates has been introduced; templates permit comparison of the query document with a collection of similar documents.

A template is a set of training documents that have the same general visual layout (e.g., articles from the same journal, Web pages from the same site, etc.). The documents in each template may differ in details but share the same overall structure. Using templates, one finds the template most similar to the query document (rather than the single most similar document). This is accomplished by averaging, for each template, the similarity scores between the query document and all documents in the template and then picking the template with the highest average similarity. Once the most similar template is determined, the target fields are found by taking the object of Q most similar to a target field in any of the documents of the template. A description of the process of finding target fields through the use of templates is provided below:

Input:
■ Q and its O-Tree
■ Topt = {T1, . . . , Tk} (the most similar template) and the respective O-Trees
■ {f(T) | f ∈ F, T ∈ Topt} (target fields in Topt)
Output: {f(Q) | f ∈ F} (target fields in Q)
1. For each f ∈ F do
2.     For each T ∈ Topt do
3.         Let f̄(T) ∈ O(T) be minimal such that f(T) ⊆ f̄(T)
4.         XT ← argmax{sim(f̄(T), X) | X ∈ O(Q), X of type t}
5.         s(T) ← sim(f̄(T), XT)
6.     if max{s(T) | T ∈ Topt} ≥ α then
7.         f(Q) ← XT∗ where T∗ = argmax{s(T) | T ∈ Topt}
8.     else f(Q) ← 0

VI.7.8 Experimental Results
A system for visual information extraction, as described above, was implemented on documents that are analyst reports from several leading investment banks. The training data consisted of 130 analyst reports: 38 from Bear Stearns, 14 from CSFB, 15 from Dresdner, and 63 from Morgan Stanley. All the documents were in PDF format. The training documents were divided into a total of 30 templates: 7 from the Bear Stearns data, 4 from the CSFB data, 5 from the Dresdner data, and 14 from the Morgan Stanley data. All the training documents were manually annotated for the target fields, which included the following: Author, Title, Subtitle, Company, Ticker, Exchange, Date, Geography, and Industry Info. Not all documents included all target fields, but within each template all documents had the same target fields.

The system was tested on 255 query documents: 33 from Bear Stearns, 12 from CSFB, 14 from Dresdner, and 196 from Morgan Stanley. With regard to the implementation, the type hierarchy H used in the system is provided in Figure VI.7, and the signature hierarchy contained the objects of type column and paragraph.

[Figure VI.7. The type hierarchy: Document at the root, containing Image, Graphic line, and Column; Column contains Paragraph, which contains Text line, which contains Text.]

The implementation of the fitness function fit(·,·) (for the fitness of one object within another) takes into account the distance between the objects and the similarity of the font type; for the fitness of a line within an existing paragraph, it also takes into account the distance between lines. The similarity function sim(·,·), which measures the similarity between objects in different documents, is primarily based on the similarity between the sizes and locations of the respective objects. (A sketch of such a fitness function is given below.)
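As an illustration, here is a minimal Python sketch of a fit(·,·) function in this spirit; the bounding-box representation, weights, and decay constant are all illustrative assumptions, not the system's actual implementation.

def box_distance(a, b):
    # Gap between two bounding boxes given as (x0, y0, x1, y1).
    dx = max(b[0] - a[2], a[0] - b[2], 0)
    dy = max(b[1] - a[3], a[1] - b[3], 0)
    return (dx ** 2 + dy ** 2) ** 0.5

def fit(candidate, obj):
    # Fitness of `candidate` as an additional member of `obj`: spatially
    # close and similar in font => fitness near 1.
    d = min(box_distance(candidate.bbox, m.bbox) for m in obj.members)
    spatial = 1.0 / (1.0 + d / 10.0)        # decays with distance
    font_sim = 1.0 if candidate.font in {m.font for m in obj.members} else 0.5
    return 0.7 * spatial + 0.3 * font_sim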
The performance of the system was measured both for the basic algorithm and with the use of templates. The overall average recall and precision values are given in Figure VI.8. On the whole, the introduction of templates improved the performance of the algorithm, increasing the average accuracy from 83 to 90 percent.

[Figure VI.8. Overall recall and precision rates for the basic algorithm and for the algorithm using templates.]

Note that for both algorithms the recall and precision values are essentially identical. The reason is that, for any target field f, on the one hand each document contains only one object of type f, and on the other hand the algorithm marks exactly one object as being of type f; thus, for every recall error there is a corresponding precision error. The slight difference that does exist between the recall and precision figures is due to the cases in which the algorithm decided not to mark any element, signifying a recall error but not a precision error.

Some target fields are harder to detect than others. It is interesting to note that, although the introduction of templates improves accuracy in most cases, there are some target fields for which it reduces accuracy. Understanding the exact reasons for this and how to overcome such problems is a topic for further research.

VI.8 FURTHER READING
Section VI.1: For a full list of definitions related to information extraction, see <nist.gov/iaui/894.02/related_projects/muc/info/definitions.html>.
Section VI.4: Descriptions of rule-based information extraction systems can be found in Hobbs et al. (1991); Appelt, Hobbs, Bear, Israel, and Tyson (1993); Grishman (1996); Freitag (1997); Grishman (1997); Wilks (1997); and Ciravegna et al. (1999).
Section VI.5: Algorithms for anaphora resolution can be found in Rich and LuperFoy (1988); Lappin and Leass (1994); McCarthy and Lehnert (1995); Humphreys, Gaizauskas, and Azzam (1997); Kehler (1997); Kennedy and Boguraev (1997); Barbu and Mitkov (2001); Klebanov and Wiemer-Hastings (2002); Ng and Cardie (2002); and Ng and Cardie (2003). Discussions of the evaluation of anaphora resolution algorithms can be found in Aone and Bennett (1995) and Azzam, Humphreys, and Gaizauskas (1998).
Section VI.6: More details about the WHISK algorithm can be found in Soderland (1999). The description of the BWI algorithm can be found in Freitag and Kushmerick (2000). The (LP)2 algorithm is described in Ciravegna (2001).

VII Probabilistic Models for Information Extraction

Several common themes frequently recur in many tasks related to processing and analyzing complex phenomena, including natural language texts. Among these themes are classification schemes, clustering, probabilistic models, and rule-based systems. This chapter describes some of these techniques generally, and the next chapter applies them to the tasks described in Chapter VI.

Research has demonstrated that it is extremely fruitful to model the behavior of complex systems as some form of random process. Probabilistic models often show better accuracy and robustness against noise than categorical models. The ultimate reason for this is not quite clear and is an excellent subject for a philosophical debate. Nevertheless, several probabilistic models have turned out to be especially useful for the different tasks of extracting meaning from natural language texts. Most prominent among these probabilistic approaches are hidden Markov models (HMMs), stochastic context-free grammars (SCFGs), and maximal entropy (ME).
VII.1 HIDDEN MARKOV MODELS
An HMM is a finite-state automaton with stochastic state transitions and symbol emissions (Rabiner 1990). The automaton models a probabilistic generative process in which a sequence of symbols is produced by starting in an initial state, emitting a symbol selected by that state, making a transition to a new state, emitting a symbol selected by the new state, and repeating this transition–emission cycle until a designated final state is reached.

Formally, let O = {o1, . . . , oM} be the finite set of observation symbols and Q = {q1, . . . , qN} the finite set of states. A first-order Markov model λ is a triple (π, A, B), where π : Q → [0, 1] defines the starting probabilities, A : Q × Q → [0, 1] defines the transition probabilities, and B : Q × O → [0, 1] denotes the emission probabilities. Because the functions π, A, and B define true probabilities, they must satisfy

Σq∈Q π(q) = 1,   Σq′∈Q A(q, q′) = 1, and   Σo∈O B(q, o) = 1 for all states q.

A model λ, together with the random process described above, induces a probability distribution over the set of all possible observation sequences.

VII.1.1 The Three Classic Problems Related to HMMs
Most applications of hidden Markov models can be reduced to three basic problems:
1. Find P(T | λ) – the probability of a given observation sequence T in a given model λ.
2. Find argmaxS∈Q|T| P(T, S | λ) – the most likely state trajectory given λ and T.
3. Find argmaxλ P(T | λ) – the model that best accounts for a given sequence.

The first problem allows us to compute the probability distribution induced by the model. The second finds the most probable state sequence for a given observation sequence. These two tasks are typically used for analyzing a given observation. The third problem, by contrast, adjusts the model itself to maximize the likelihood of the given observation; it can be viewed as the HMM training problem. We now describe how each of these three problems can be solved.

We start by calculating P(T | λ), where T = t1t2 . . . tk ∈ O∗ is a sequence of observation symbols. The most obvious way to do this would be to enumerate every possible state sequence of length |T|. Let S = s1s2 . . . s|T| ∈ Q|T| be one such sequence. Then we can calculate the probability P(T | S, λ) of generating T knowing that the process went through the state sequence S. By the Markovian assumption, the emission probabilities are all independent of each other; therefore,

P(T | S, λ) = Πi=1..|T| B(si, ti).

Similarly, the transition probabilities are independent. Thus the probability P(S | λ) of the process going through the state sequence S is

P(S | λ) = π(s1) · Πi=1..|T|−1 A(si, si+1).

Using the above probabilities, the probability P(T | λ) of generating the sequence can be calculated as

P(T | λ) = ΣS∈Q|T| P(T | S, λ) · P(S | λ).

This solution is of course infeasible in practice because of the exponential number of possible state sequences. To solve the problem efficiently, we use a dynamic programming technique; the resulting algorithm is called the forward–backward procedure.

VII.1.2 The Forward–Backward Procedure
Let αm(q), the forward variable, denote the probability of generating the initial segment t1t2 . . . tm of the sequence T and finishing at the state q at time m. This forward variable can be computed recursively as follows:
1. α1(q) = π(q) · B(q, t1),
2. αn+1(q) = Σq′∈Q αn(q′) · A(q′, q) · B(q, tn+1).

Then the probability of the whole sequence T can be calculated as

P(T | λ) = Σq∈Q α|T|(q).

(A minimal sketch of this computation appears below.)
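Here is a minimal Python sketch of the forward procedure just described, with the model given as plain dictionaries pi[q], A[q][q2], and B[q][o].

def forward(pi, A, B, observations):
    # Returns P(T | lambda) for the observation sequence T.
    states = list(pi)
    # alpha_1(q) = pi(q) * B(q, t1)
    alpha = {q: pi[q] * B[q][observations[0]] for q in states}
    for t in observations[1:]:
        # alpha_{n+1}(q) = sum_{q'} alpha_n(q') * A(q', q) * B(q, t)
        alpha = {q: sum(alpha[q2] * A[q2][q] for q2 in states) * B[q][t]
                 for q in states}
    return sum(alpha.values())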
In a similar manner, one can define βm(q), the backward variable, which denotes the probability of starting at the state q at time m and generating the final segment tm+1 . . . t|T| of the sequence T. The backward variable is calculated starting from the end and going backward to the beginning of the sequence:
1. β|T|(q) = 1,
2. βn−1(q) = Σq′∈Q A(q, q′) · B(q′, tn) · βn(q′).

The probability of the whole sequence is then

P(T | λ) = Σq∈Q π(q) · B(q, t1) · β1(q).

VII.1.3 The Viterbi Algorithm
We now proceed to the solution of the second problem – finding the most likely state sequence for a given sequence T. As with the previous problem, enumerating all possible state sequences S and choosing the one maximizing P(T, S | λ) is infeasible. Instead, we again use dynamic programming, utilizing the following property of the optimal state sequence: if T′ is an initial segment of the sequence T = t1t2 . . . t|T| and S = s1s2 . . . s|T| is a state sequence maximizing P(T, S | λ), then S′ = s1s2 . . . s|T′| maximizes P(T′, S′ | λ) among all state sequences of length |T′| ending with s|T′|. The resulting algorithm is called the Viterbi algorithm.

Let γn(q) denote the state sequence ending with the state q that is optimal for the initial segment Tn = t1t2 . . . tn among all sequences ending with q, and let δn(q) denote the probability P(Tn, γn(q) | λ) of generating this initial segment following those optimal states. Delta and gamma can be calculated recursively as follows:
1. δ1(q) = π(q) · B(q, t1), γ1(q) = q,
2. δn+1(q) = maxq′∈Q δn(q′) · A(q′, q) · B(q, tn+1), γn+1(q) = γn(q′)q, where q′ = argmaxq′∈Q δn(q′) · A(q′, q) · B(q, tn+1).

Then the best state sequence among {γ|T|(q) : q ∈ Q} is the optimal one:

argmaxS∈Q|T| P(T, S | λ) = γ|T|(argmaxq∈Q δ|T|(q)).

Example of the Viterbi Computation
Using the HMM described in Figure VII.1 with the sequence (a, b, a), the computation proceeds as follows. The model parameters (with the ith entry referring to state Si) are

π = (0.5, 0, 0.5),

A = | 0.1 0.4 0.4 |
    | 0.4 0.1 0.5 |
    | 0.4 0.5 0.1 |,

B(a) = (0.5, 0.8, 0.2),   B(b) = (0.5, 0.2, 0.8).

[Figure VII.1. A sample HMM.]

First step (a):
■ δ1(S1) = π(S1) · B(S1, a) = 0.5 · 0.5 = 0.25
■ δ1(S2) = π(S2) · B(S2, a) = 0
■ δ1(S3) = π(S3) · B(S3, a) = 0.5 · 0.2 = 0.1

Second step (b):
■ δ2(S1) = maxq′∈Q δ1(q′) · A(q′, S1) · B(S1, b)
  = max(δ1(S1) · A(S1, S1) · B(S1, b), δ1(S2) · A(S2, S1) · B(S1, b), δ1(S3) · A(S3, S1) · B(S1, b))
  = max(0.25 · 0.1 · 0.5, 0, 0.1 · 0.4 · 0.5) = max(0.0125, 0, 0.02) = 0.02
■ γ2(S1) = S3

In a similar way, we continue to calculate the other δ and γ factors. Upon reaching t3 we can see that S1 and S3 have the highest probabilities; hence, we trace back our steps from both states using the γ variables. We have in this case two optimal paths: {S1, S3, S1} and {S3, S2, S3}. The computation is summarized in Figure VII.2, which lists δn(q) for each state and time slice:

       t1 (a)   t2 (b)   t3 (a)
S1     0.25     0.02     0.016
S2     0        0.04     0.008
S3     0.1      0.08     0.016

[Figure VII.2. Computation of the optimal path using the Viterbi algorithm.]

Note that, unlike the forward–backward algorithm described in Section VII.1.2, the Viterbi algorithm does not use summation of probabilities; only multiplications are involved. This is convenient because it allows the use of logarithms of probabilities instead of the probabilities themselves, with summation replacing multiplication. This can be important because, for large sequences, the probabilities soon become infinitesimal and leave the range of the usual floating-point numbers. (A log-space sketch of the algorithm follows.)
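Here is a minimal log-space Python sketch of the Viterbi algorithm, in the same dictionary layout as the forward sketch; log probabilities are added rather than multiplied, which avoids the underflow just mentioned.

import math

def viterbi(pi, A, B, observations):
    # Returns (log probability, most likely state sequence).
    def log(p):
        return math.log(p) if p > 0 else float("-inf")

    states = list(pi)
    # delta_1(q) = log pi(q) + log B(q, t1); gamma_1(q) = [q]
    delta = {q: log(pi[q]) + log(B[q][observations[0]]) for q in states}
    gamma = {q: [q] for q in states}
    for t in observations[1:]:
        new_delta, new_gamma = {}, {}
        for q in states:
            # Best predecessor q' maximizing delta_n(q') + log A(q', q).
            prev = max(states, key=lambda q2: delta[q2] + log(A[q2][q]))
            new_delta[q] = delta[prev] + log(A[prev][q]) + log(B[q][t])
            new_gamma[q] = gamma[prev] + [q]
        delta, gamma = new_delta, new_gamma
    best = max(states, key=lambda q: delta[q])
    return delta[best], gamma[best]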
VII.1.4 The Training of the HMM
The most difficult of the three problems is training the HMM. Here we cover only the problem of estimating the parameters of an HMM, leaving the topology of the finite-state automaton fixed. The training algorithm is given some initial HMM and adjusts it so as to maximize the probability of the training sequence. The set of states is given in advance, and transition and emission probabilities that are initially zero remain zero. The adjustment formulas are called the Baum–Welch reestimation formulas.

Let µn(q) be the probability P(sn = q | T, λ) of being in the state q at time n while generating the observation sequence T. Then µn(q) · P(T | λ) is the probability of generating T passing through the state q at time n. By the definition of the forward and backward variables presented in Section VII.1.2, this probability is equal to αn(q) · βn(q). Thus,

µn(q) = αn(q) · βn(q) / P(T | λ).

Also let ϕn(q, q′) be the probability P(sn = q, sn+1 = q′ | T, λ) of passing from state q to state q′ at time n while generating the observation sequence T. As in the preceding equation,

ϕn(q, q′) = αn(q) · A(q, q′) · B(q′, tn+1) · βn+1(q′) / P(T | λ).

The sum of µn(q) over n = 1 . . . |T| can be seen as the expected number of times the state q was visited while generating the sequence T; summing over n = 1 . . . |T| − 1 instead gives the expected number of transitions out of the state q, because there is no transition at time |T|. Similarly, the sum of ϕn(q, q′) over n = 1 . . . |T| − 1 can be interpreted as the expected number of transitions from the state q to q′.

The Baum–Welch formulas reestimate the parameters of the model λ according to these expectations:

π′(q) := µ1(q),
A′(q, q′) := Σn=1..|T|−1 ϕn(q, q′) / Σn=1..|T|−1 µn(q),
B′(q, o) := Σn:tn=o µn(q) / Σn=1..|T| µn(q).

It can be shown that the model λ′ = (π′, A′, B′) either is equal to λ, in which case λ is a critical point of the likelihood function P(T | λ), or accounts for the training sequence T better than the original model in the sense that P(T | λ′) > P(T | λ). Therefore, the training problem can be solved by iteratively applying the reestimation formulas until convergence. (A sketch of a single reestimation pass is given below.)
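The following Python sketch performs one Baum–Welch pass over a single training sequence, following the µ and ϕ formulas above; scaling against numerical underflow is omitted for clarity.

def baum_welch_step(pi, A, B, T):
    states = list(pi)
    n = len(T)
    # Forward variables alpha[m][q] and backward variables beta[m][q].
    alpha = [{q: pi[q] * B[q][T[0]] for q in states}]
    for t in T[1:]:
        alpha.append({q: sum(alpha[-1][p] * A[p][q] for p in states) * B[q][t]
                      for q in states})
    beta = [{q: 1.0 for q in states}]
    for t in reversed(T[1:]):
        beta.insert(0, {q: sum(A[q][p] * B[p][t] * beta[0][p] for p in states)
                        for q in states})
    pT = sum(alpha[-1][q] for q in states)
    # mu_n(q) and phi_n(q, q') as defined above (0-indexed here).
    mu = [{q: alpha[m][q] * beta[m][q] / pT for q in states} for m in range(n)]
    phi = [{(q, p): alpha[m][q] * A[q][p] * B[p][T[m + 1]] * beta[m + 1][p] / pT
            for q in states for p in states} for m in range(n - 1)]
    new_pi = {q: mu[0][q] for q in states}
    new_A = {q: {p: sum(phi[m][q, p] for m in range(n - 1)) /
                    sum(mu[m][q] for m in range(n - 1))
                 for p in states} for q in states}
    new_B = {q: {o: sum(mu[m][q] for m in range(n) if T[m] == o) /
                    sum(mu[m][q] for m in range(n))
                 for o in set(T)} for q in states}
    return new_pi, new_A, new_B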
VII.1.5 Dealing with Training Data Sparseness
It is often the case that the amount of training data – the length of the training sequence T – is insufficient for a robust estimation of the parameters of a complex HMM. In such cases there is a trade-off between constructing complex models with many states and constructing simple models with only a few states. A complex model is better able to represent the intricate structure of the task but often results in a poor estimation of parameters; a simpler model, on the other hand, yields a robust parameter estimation but performs poorly because it is not sufficiently expressive to model the data. Smoothing and shrinkage (Freitag and McCallum 1999) are the techniques typically used to take the sting out of data sparseness problems in probabilistic modeling. This section describes the techniques with regard to HMMs, although they apply in other contexts as well, such as SCFGs.

Smoothing is the process of flattening a probability distribution implied by a model so that all reasonable sequences can occur with some probability. This often involves broadening the distribution by redistributing weight from high-probability regions to zero-probability regions. Note that smoothing may change the topology of an HMM by making some initially zero probability nonzero. The simplest possible smoothing method is to pretend that every possible training event occurs one time more than it actually does; any constant can be used instead of "one." This method is called Laplace smoothing. Other possible methods include back-off smoothing, deleted interpolation, and others; full details of these smoothing techniques can be found in Manning and Schutze (1999).

Shrinkage is defined in terms of a hierarchy representing the expected similarity between parameter estimates. With respect to HMMs, the hierarchy can be defined as a tree with the HMM states as the leaves, all at the same depth. This hierarchy is created as follows. First, the most complex HMM is built, and its states are used as the leaves of the tree. Then the states are separated into disjoint classes within which the states are expected to have similar probability distributions; the classes become the parents of their constituent states in the hierarchy. Note that the HMM structure at the leaves induces a simpler HMM structure at the level of the classes, generated by summing the probabilities of emissions and transitions of all states in a class. This process may be repeated until only a single-state HMM remains at the root of the hierarchy.

Training such a hierarchy is straightforward: the emission and transition probabilities of the states at the internal levels of the hierarchy are calculated by summing the corresponding probabilities of their descendant leaves. Modeling using the hierarchy is also simple: the topology of the most complex HMM is used, but the transition and emission probabilities of a given state are calculated by linearly interpolating among the corresponding probabilities of all ancestors of the state in the shrinkage hierarchy. The weights of the different models in the interpolation can be fixed at some reasonable value, such as 1/2, or can be optimized using held-out training data.

VII.2 STOCHASTIC CONTEXT-FREE GRAMMARS
An SCFG is a quintuple G = (T, N, S, R, P), where T is the alphabet of terminal symbols (tokens), N is the set of nonterminals, S is the starting nonterminal, R is the set of rules, and P : R → [0, 1] defines their probabilities. The rules have the form

n → s1s2 . . . sk,

where n is a nonterminal and each si is either a token or another nonterminal. As can be seen, an SCFG is a usual context-free grammar with the addition of the P function. As is true for a canonical (nonstochastic) grammar, an SCFG is said to generate (or accept) a given string (sequence of tokens) if the string can be produced by starting from a sequence containing just the starting symbol S and expanding the nonterminals one by one using the rules of the grammar. The particular way the string was generated can be naturally represented by a parse tree with the starting symbol as the root, nonterminals as internal nodes, and the tokens as leaves.

The semantics of the probability function P are straightforward. If r is the rule n → s1s2 . . . sk, then P(r) is the frequency of expanding n using this rule; or, in Bayesian terms, if it is known that a given sequence of tokens was generated by expanding n, then P(r) is the a priori likelihood that n was expanded using the rule r. Thus, it follows that for every nonterminal n, the sum of the probabilities P(r) over all rules r headed by n must equal one. (A small sketch of these semantics is given below.)
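The following Python sketch illustrates these semantics on a toy grammar fragment (an illustrative assumption): rule probabilities are checked to sum to one per nonterminal, and, assuming independent rule applications, the probability of a parse tree is the product of the probabilities of the rules used in it.

import math

# Each nonterminal maps to a list of (right-hand side, probability) rules.
RULES = {
    "S":  [(("NP", "VP"), 1.0)],
    "NP": [(("det", "noun"), 0.6), (("noun",), 0.4)],
    "VP": [(("verb", "NP"), 0.7), (("verb",), 0.3)],
}

def check_normalized(rules):
    # For every nonterminal, the rule probabilities must sum to one.
    return all(abs(sum(p for _, p in rs) - 1.0) < 1e-9
               for rs in rules.values())

def tree_probability(tree, rules):
    # tree: (nonterminal, [children]); a leaf is a plain token string.
    if isinstance(tree, str):
        return 1.0        # tokens contribute no rule probability
    label, children = tree
    rhs = tuple(c if isinstance(c, str) else c[0] for c in children)
    prob = dict((tuple(r), p) for r, p in rules[label])[rhs]
    return prob * math.prod(tree_probability(c, rules) for c in children)

# P = P(S -> NP VP) * P(NP -> noun) * P(VP -> verb) = 1.0 * 0.4 * 0.3 = 0.12
p = tree_probability(("S", [("NP", ["noun"]), ("VP", ["verb"])]), RULES)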
VII.2.1 Using SCFGs
Usually, some of the nonterminal symbols of a grammar correspond to meaningful language concepts, and the rules define the allowed syntactic relations between these concepts. For instance, in a parsing problem the nonterminals may include S, NP, VP, and others, and the rules define the syntax of the language – for example, S → NP VP. When the grammar is built, it is used for parsing new sentences.

In general, grammars are ambiguous in the sense that a given string can be generated in many different ways. With nonstochastic grammars there is no way to compare different parse trees, and thus the only information we can gather for a given sentence is whether or not it is grammatical – that is, whether it can be produced by any parse. With SCFGs, different parses have different probabilities; therefore, it is possible to find the best one, resolving the ambiguity.

In designing preprocessing systems around SCFGs, it has been found neither necessary nor desirable (for performance reasons) to perform a full syntactic parsing of all sentences in a document. Instead, a very basic "parsing" can be employed for the bulk of a text, whereas within the relevant parts the grammar is much more detailed. Thus, the extraction grammars can be said to define sublanguages for very specific domains.

In the classical definition of an SCFG it is assumed that the rules are all independent. In this case it is possible to find the (unconditional) probability of a given parse tree by simply multiplying the probabilities of all rules participating in it. The usual parsing problem is then formulated as follows: given a sequence of tokens (a string), find the most probable parse tree that could generate the string. A simple generalization of the Viterbi algorithm is able to solve this problem efficiently.

In practical applications of SCFGs it is rarely the case that the rules are truly independent. The easiest way to cope with this problem while leaving most of the formalism intact is to let the probabilities P(r) be conditioned on the context where the rule is applied. If the conditioning context is chosen reasonably, the Viterbi algorithm still works correctly even for this more general problem.

VII.3 MAXIMAL ENTROPY MODELING
Consider a random process of an unknown nature that produces a single output value y, a member of a finite set Y of possible output values. The process of generating y may be influenced by some contextual information x, a member of the set X of possible contexts. The task is to construct a statistical model that accurately represents the behavior of the random process – a model that estimates the conditional probability of generating y given the context x.

Let P(x, y) denote the unknown true joint probability distribution of the random process, and let p(y | x) be the model we are trying to build, taken from the class ℘ of all possible models. To build the model we are given a set of training samples generated by observing the random process for some time.
VII.3 MAXIMAL ENTROPY MODELING

Consider a random process of an unknown nature that produces a single output value y, a member of a finite set Y of possible output values. The process of generating y may be influenced by some contextual information x – a member of the set X of possible contexts. The task is to construct a statistical model that accurately represents the behavior of the random process. Such a model is a method of estimating the conditional probability of generating y given the context x. Let P(x, y) denote the unknown true joint probability distribution of the random process, and let p(y | x) be the model we are trying to build, taken from the class ℘ of all possible models. To build the model we are given a set of training samples generated by observing the random process for some time. The training data consist of a sequence of pairs (xi, yi) of different outputs produced in different contexts.

In many interesting cases the set X is too large and underspecified to be used directly. For instance, X may be the set of all dots "." in all possible English texts. By contrast, Y may be extremely simple while remaining interesting. In the preceding case, Y may contain just two outcomes: "SentenceEnd" and "NotSentenceEnd." The target model p(y | x) would in this case solve the problem of finding sentence boundaries. In such cases it is impossible to use the context x directly to generate the output y. There are usually many regularities and correlations, however, that can be exploited. Different contexts are usually similar to each other in all manner of ways, and similar contexts tend to produce similar output distributions.

To express such regularities and their statistics, one can use constraint functions and their expected values. A constraint function f : X × Y → R can be any real-valued function. In practice it is common to use binary-valued trigger functions of the form

\[ f(x, y) = \begin{cases} 1, & \text{if } C(x) \text{ and } y = y_i, \\ 0, & \text{otherwise.} \end{cases} \]

Such a trigger function returns one for a pair (x, y) if the context x satisfies the condition predicate C and the output value y is yi. A common short notation for such a trigger function is C → yi. For the example above, useful triggers are

previous token is "Mr" → NotSentenceEnd,
next token is capitalized → SentenceEnd.

Given a constraint function f, we express its importance by requiring our target model to faithfully reproduce f's expected value in the true distribution:

\[ p(f) = \sum_{x,y} p(x, y) f(x, y) = P(f) = \sum_{x,y} P(x, y) f(x, y). \]

In practice we cannot calculate the true expectation and must use the empirical expected value, calculated by summing over the training samples:

\[ p_E(f) = \frac{1}{N} \sum_{i=1}^{N} \sum_{y \in Y} p(y \mid x_i) f(x_i, y) = P_E(f) = \frac{1}{N} \sum_{i=1}^{N} f(x_i, y_i). \]

The choice of feature functions is of course domain dependent. For now, let us assume the complete set of features F = {fk} is given. One can express the completeness of the set of features by requiring that the model agree with all the expected value constraints

\[ p_E(f_k) = P_E(f_k) \quad \text{for all } f_k \in F \]

while otherwise being as uniform as possible. There are of course many models satisfying the expected value constraints, but the uniformity requirement defines the target model uniquely. The degree of uniformity of a model is expressed by its conditional entropy

\[ H(p) = -\sum_{x,y} p(x) \, p(y \mid x) \, \log p(y \mid x) \]

or, empirically,

\[ H_E(p) = -\frac{1}{N} \sum_{i=1}^{N} \sum_{y \in Y} p(y \mid x_i) \log p(y \mid x_i). \]

The constrained optimization problem of finding the maximal-entropy target model is solved by application of Lagrange multipliers and the Kuhn–Tucker theorem. Let us introduce a parameter λk (the Lagrange multiplier) for every feature and define the Lagrangian Λ(p, λ) by

\[ \Lambda(p, \lambda) \equiv H_E(p) + \sum_k \lambda_k \big( p_E(f_k) - P_E(f_k) \big). \]

Holding λ fixed, we compute the unconstrained maximum of the Lagrangian over all p ∈ ℘. Denote by pλ the p where Λ(p, λ) achieves its maximum and by Λ(λ) the value of Λ at this point. The functions pλ and Λ(λ) can be calculated using simple calculus:

\[ p_\lambda(y \mid x) = \frac{1}{Z_\lambda(x)} \exp\Big( \sum_k \lambda_k f_k(x, y) \Big), \]

\[ \Lambda(\lambda) = -\frac{1}{N} \sum_{i=1}^{N} \log Z_\lambda(x_i) + \sum_k \lambda_k P_E(f_k), \]

where Zλ(x) is a normalizing constant determined by the requirement that Σy∈Y pλ(y | x) = 1. Finally, we pose the dual optimization problem

\[ \lambda^* = \operatorname{argmax}_\lambda \Lambda(\lambda). \]
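The parametric form of the solution is easy to compute directly. A minimal sketch in Python (the trigger functions, weights, and context representation are invented; a real model would have many features with trained λ's):

import math

def me_prob(y, x, features, lambdas, outputs):
    # p_lambda(y | x) = exp(sum_k lambda_k * f_k(x, y)) / Z_lambda(x)
    def score(out):
        return math.exp(sum(l * f(x, out)
                            for l, f in zip(lambdas, features)))
    return score(y) / sum(score(out) for out in outputs)

# Two trigger features for the sentence-boundary example above.
features = [
    lambda x, y: 1.0 if x["prev"] == "Mr" and y == "NotSentenceEnd" else 0.0,
    lambda x, y: 1.0 if x["next_cap"] and y == "SentenceEnd" else 0.0,
]
lambdas = [1.5, 0.8]
outputs = ["SentenceEnd", "NotSentenceEnd"]
context = {"prev": "Mr", "next_cap": True}
for y in outputs:
    print(y, me_prob(y, context, features, lambdas, outputs))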
The Kuhn–Tucker theorem asserts that, under certain conditions, which include our case, the solutions of the primal and dual optimization problems coincide. That is, the model p, which maximizes HE(p) while satisfying the constraints, has the parametric form pλ. It is interesting to note that the function Λ(λ) is simply the log-likelihood of the training sample as predicted by the model pλ. Thus, the model pλ maximizes the likelihood of the training sample among all models of the parametric form pλ.

VII.3.1 Computing the Parameters of the Model

The function Λ(λ) is well behaved from the perspective of numerical optimization, for it is smooth and concave. Consequently, various methods can be used for calculating λ. Generalized iterative scaling (GIS) is an algorithm specifically tailored for the problem. This algorithm is applicable whenever all constraint functions are nonnegative: fk(x, y) ≥ 0. The algorithm starts with an arbitrary choice of λ's – for instance, λk = 0 for all k. At each iteration the λ's are adjusted as follows:

1. For all k, let Δλk be the solution to the equation

\[ P_E(f_k) = \frac{1}{N} \sum_{i=1}^{N} \sum_{y \in Y} p_\lambda(y \mid x_i) \, f_k(x_i, y) \, \exp\big( \Delta\lambda_k \, f^{\#}(x_i, y) \big), \]

where f#(x, y) = Σk fk(x, y).
2. For all k, let λk := λk + Δλk.

In the simplest case, when f# is constant, Δλk is simply (1/f#) · log(PE(fk)/pλE(fk)), where pλE(fk) is the expectation of fk under the current model pλ. Otherwise, any numerical algorithm for solving the equation, such as Newton's method, can be used.
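A sketch of one GIS iteration for the simplest case, in which f#(x, y) equals a constant C for every pair, follows (Python; the representation of samples, features, and outputs is invented, and the update assumes every feature has nonzero empirical and model expectations):

import math

def gis_step(lambdas, features, samples, outputs, C):
    # One GIS update under the assumption sum_k f_k(x, y) == C everywhere.
    N = len(samples)

    def p(y, x):  # current model p_lambda(y | x)
        s = {o: math.exp(sum(l * f(x, o) for l, f in zip(lambdas, features)))
             for o in outputs}
        z = sum(s.values())
        return s[y] / z

    new_lambdas = []
    for k, f in enumerate(features):
        empirical = sum(f(x, y) for x, y in samples) / N
        expected = sum(p(o, x) * f(x, o)
                       for x, _ in samples for o in outputs) / N
        new_lambdas.append(lambdas[k] + math.log(empirical / expected) / C)
    return new_lambdas

Iterating gis_step until the λ's stabilize yields the maximal entropy model; when f# is not constant, each Δλk must instead be found by a numerical root finder.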
VII.4 MAXIMAL ENTROPY MARKOV MODELS

For many tasks, conditional models have advantages over generative models like the HMM. The maximal entropy Markov model (McCallum, Freitag, and Pereira 2000), or MEMM, is one class of such conditional models and the one closest to the HMM. A MEMM is a probabilistic finite-state acceptor. Unlike an HMM, which has separate transition and emission probabilities, a MEMM has only transition probabilities, which, however, depend on the observations. A slightly modified version of the Viterbi algorithm solves the problem of finding the most likely state sequence for a given observation sequence.

Formally, a MEMM consists of a set Q = {q1, …, qN} of states and a set of transition probability functions Aq : X × Q → [0, 1], where X denotes the set of all possible observations. Aq(x, q′) gives the probability P(q′ | q, x) of transition from q to q′ given the observation x. Note that the model does not generate x but only conditions on it; thus, the set X need not be small and need not even be fully defined. The transition probability functions Aq are separate exponential models trained using maximal entropy.

The task of a trained MEMM is to produce the most probable sequence of states given the observation. This task is solved by a simple modification of the Viterbi algorithm. The forward–backward algorithm, however, loses its meaning because here it computes the probability of the observation being generated by any state sequence, which is always one. Nevertheless, the forward and backward variables are still useful for MEMM training. The forward variable αm(q), defined as for HMMs, denotes the probability of being in state q at time m given the observation. It is computed recursively as

\[ \alpha_{m+1}(q) = \sum_{q' \in Q} \alpha_m(q') \, A_{q'}(x, q). \]

The backward variable βm(q) denotes the probability of starting from state q at time m given the observation. It is computed similarly:

\[ \beta_{m-1}(q) = \sum_{q' \in Q} A_q(x, q') \, \beta_m(q'). \]

The model Aq for the transition probabilities from a state is defined parametrically using constraint functions. If fk : X × Q → R is the set of such functions for a given state q, then the model Aq can be represented in the form

\[ A_q(x, q') = \frac{1}{Z(x, q)} \exp\Big( \sum_k \lambda_k f_k(x, q') \Big), \]

where the λk are the parameters to be trained and Z(x, q) is the normalizing factor making the probabilities of all transitions from a state sum to one.

VII.4.1 Training the MEMM

If the true state sequence for the training data is known, the parameters of the models can be straightforwardly estimated using the GIS algorithm for training ME models. If the sequence is not known – for instance, if there are several states with the same label in a fully connected MEMM – the parameters must be estimated using a combination of the Baum–Welch procedure and iterative scaling. Every iteration consists of two steps:

1. Using the forward–backward algorithm and the current transition functions, compute the state occupancies for all training sequences.
2. Compute the new transition functions using GIS with the feature frequencies based on the state occupancies computed in step 1.

It is unnecessary to run GIS to convergence in step 2; a single GIS iteration is sufficient.
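For concreteness, here is a sketch of the modified Viterbi decoder (Python; the interface of the trained transition models is invented – trans_prob(q, x, q2) stands for Aq(x, q′) = P(q′ | q, x)):

def memm_viterbi(observations, states, start_probs, trans_prob):
    # delta[q] = probability of the best state path ending in q so far.
    delta = {q: start_probs[q] for q in states}
    back = []
    for x in observations[1:]:
        prev, ptr, delta = delta, {}, {}
        for q2 in states:
            best_q = max(states, key=lambda q: prev[q] * trans_prob(q, x, q2))
            delta[q2] = prev[best_q] * trans_prob(best_q, x, q2)
            ptr[q2] = best_q
        back.append(ptr)
    # Follow the back-pointers from the best final state.
    last = max(states, key=lambda q: delta[q])
    path = [last]
    for ptr in reversed(back):
        path.append(ptr[path[-1]])
    return list(reversed(path))

The only difference from the classical HMM Viterbi recursion is that the transition probability is a function of the current observation rather than a fixed matrix, and there is no separate emission term.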
VII.5 CONDITIONAL RANDOM FIELDS

Conditional random fields (CRFs) (Lafferty, McCallum, et al. 2001) constitute another conditional model based on maximal entropy. Like MEMMs, which are described in the previous section, CRFs are able to accommodate many possibly correlated features of the observation. However, CRFs are better able to trade off decisions at different sequence positions.

MEMMs were found to suffer from the so-called label bias problem. The problem appears when the MEMM contains states with different output degrees. Because the probabilities of transitions from any given state must sum to one, transitions from lower degree states receive higher probabilities than transitions from higher degree states. In the extreme case, a transition from a state with degree one always gets probability one, effectively ignoring the observation. CRFs do not have this problem because they define a single ME-based distribution over the whole label sequence. On the other hand, CRFs cannot contain "hidden" states – the training data must define the sequence of states precisely. For most practical sequence labeling problems this limitation is not significant.

In the description of CRFs presented here, attention is restricted to their simplest form – linear-chain CRFs, which generalize finite-state models like HMMs and MEMMs. Such CRFs model the conditional probability distribution of sequences of labels given the observation sequences. More general formulations are possible (Lafferty et al. 2001; McCallum and Jensen 2003).

Let X be a random variable over the observation sequences and Y a random variable over the label sequences. All components Yi of Y are assumed to range over a finite set L of labels. The labels roughly correspond to states in finite-state models. The variables X and Y are jointly distributed, but a CRF constructs a conditional model p(Y | X) without explicitly modeling the margin p(X).

A CRF on (X, Y) is specified by a vector f = (f1, f2, …, fm) of local features and a corresponding weight vector λ = (λ1, λ2, …, λm). Each local feature fj(x, y, i) is a real-valued function of the observation sequence x, the label sequence y = (y1, y2, …, yn), and the sequence position i. The value of a feature function at any given position i may depend only on yi or on yi and yi+1 but not on any other components of the label sequence y. A feature that depends only on yi at a given position i is called a state feature; if it depends on yi and yi+1, it is called a transition feature. The global feature vector F(x, y) is the sum of the local features at all positions:

\[ F(x, y) = \sum_{i=1}^{n} f(x, y, i). \]

The conditional probability distribution defined by the CRF is then

\[ p_\lambda(y \mid x) = \frac{1}{Z_\lambda(x)} \exp\big( \lambda \cdot F(x, y) \big), \quad \text{where } Z_\lambda(x) = \sum_y \exp\big( \lambda \cdot F(x, y) \big). \]

It is a consequence of a fundamental theorem about Markov random fields (Kindermann and Snell 1980; Jain and Chellappa 1993) that any conditional distribution p(y | x) obeying the Markov property

\[ p(y_i \mid x, \{y_j\}_{j \ne i}) = p(y_i \mid x, y_{i-1}, y_{i+1}) \]

can be written in the exponential form above with a suitable choice of the feature functions and the weight vector. Notice also that any HMM can be represented in the form of a CRF if its set of states Q coincides with the set of labels L. If A : L × L → [0, 1] denotes the transition probability function and B : L × O → [0, 1] denotes the emission probability function, the corresponding CRF can be defined by the set of state features

fyo(x, y, k) ≡ (yk = y) and (xk = o)

and transition features

fyy′(x, y, k) ≡ (yk = y) and (yk+1 = y′)

with the weights λyo = log B(y, o) and λyy′ = log A(y, y′).

VII.5.1 The Three Classic Problems Relating to CRFs

As with HMMs, three main problems are associated with CRFs:

1. Given a CRF λ, an observation sequence x, and a label sequence y, find the conditional probability pλ(y | x).
2. Given a CRF λ and an observation sequence x, find the most probable label sequence y = argmaxy pλ(y | x).
3. Given a set of training samples (x(k), y(k)), find the CRF parameters λ that maximize the likelihood of the training data.

The typical approaches to each of these problems are outlined below.

VII.5.2 Computing the Conditional Probability

For a given x and a given position i, define an |L| × |L| transition matrix Mi(x) by

\[ M_i(x)[y, y'] = \exp\big( \lambda \cdot f(x, \{y_i = y, y_{i+1} = y'\}, i) \big). \]

Then the conditional probability pλ(y | x) can be decomposed as

\[ p_\lambda(y \mid x) = \frac{1}{Z_\lambda(x)} \prod_{i=1}^{n} M_i(x)[y_i, y_{i+1}]. \]

The normalization factor Zλ(x) can be computed by a variant of the forward–backward algorithm. The forward variables αi(x, y) and the backward variables βi(x, y), for y ∈ L, can be computed using the recurrences

\[ \alpha_0(x, y) = 1, \qquad \alpha_{i+1}(x, y) = \sum_{y' \in L} \alpha_i(x, y') \, M_i(y', y, x), \]

\[ \beta_n(x, y) = 1, \qquad \beta_{i-1}(x, y) = \sum_{y' \in L} M_{i-1}(y, y', x) \, \beta_i(x, y'). \]

Finally, Zλ(x) = Σy∈L αn(x, y).

VII.5.3 Finding the Most Probable Label Sequence

The most probable label sequence y = argmaxy pλ(y | x) can be found by a suitable adaptation of the Viterbi algorithm. Note that

argmaxy pλ(y | x) = argmaxy (λ · F(x, y))

because the normalizer Zλ(x) does not depend on y. F(x, y) decomposes into a sum of terms for consecutive pairs of labels, making the task straightforward.
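The forward recursion for Zλ(x) can be written compactly. A minimal sketch follows (Python; the local-feature signature f(x, y, y2, i), giving the value when yi = y and yi+1 = y2, is an invented convention):

import math

def crf_partition(lambdas, features, x, labels, n):
    # Z_lambda(x) = sum_y alpha_n(x, y), computed by the forward recursion.
    def M(i, y, y2):
        # M_i(x)[y, y2] = exp(lambda . f(x, {y_i = y, y_i+1 = y2}, i))
        return math.exp(sum(l * f(x, y, y2, i)
                            for l, f in zip(lambdas, features)))

    alpha = {y: 1.0 for y in labels}  # alpha_0(x, y) = 1
    for i in range(n):
        alpha = {y2: sum(alpha[y] * M(i, y, y2) for y in labels)
                 for y2 in labels}
    return sum(alpha.values())

The same matrices Mi(x), reused in a max-product instead of a sum-product recursion, give the Viterbi decoder of the next subsection.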
VII.5.4 Training the CRF

A CRF is trained by maximizing the log-likelihood of a given training set {(x(k), y(k))}:

\[ L(\lambda) = \sum_k \log p_\lambda(y^{(k)} \mid x^{(k)}) = \sum_k \big[ \lambda \cdot F(x^{(k)}, y^{(k)}) - \log Z_\lambda(x^{(k)}) \big]. \]

This function is concave in λ, and so the maximum can be found at the point where the gradient ∇L is zero:

\[ 0 = \nabla L = \sum_k \Big[ F(x^{(k)}, y^{(k)}) - \sum_y F(x^{(k)}, y) \, p_\lambda(y \mid x^{(k)}) \Big]. \]

The left side is the empirical average of the global feature vector, and the right side is its model expectation. The maximum is reached when the two are equal:

\[ (*) \qquad \sum_k F(x^{(k)}, y^{(k)}) = \sum_k \sum_y F(x^{(k)}, y) \, p_\lambda(y \mid x^{(k)}). \]

Straightforwardly computing the expectations on the right side is infeasible because of the necessity of summing over an exponential number of label sequences y. Fortunately, the expectations can be rewritten as

\[ \sum_y F(x, y) \, p_\lambda(y \mid x) = \sum_{i=1}^{n} \sum_{y, y' \in L} p_\lambda(y_i = y, y_{i+1} = y' \mid x) \, f(x, \{y_i = y, y_{i+1} = y'\}, i), \]

which brings the number of summands down to polynomial size. The probabilities pλ(yi = y, yi+1 = y′ | x) can be computed using the forward and backward variables:

\[ p_\lambda(y_i = y, y_{i+1} = y' \mid x) = \frac{1}{Z_\lambda(x)} \, \alpha_i(x, y) \, M_i(y, y', x) \, \beta_{i+1}(x, y'). \]

GIS can be used to solve equation (*). A particularly simple form of GIS further requires that the total count of all features in any training sequence be constant. If this condition does not hold, a new slack feature can be added, making the sum equal to a predefined constant S:

\[ s(x, y, i) = S - \sum_i \sum_j f_j(x, y, i). \]

If the condition holds, the parameters λ can be adjusted by λ := λ + Δλ, where the Δλ are calculated by

\[ \Delta\lambda_j = \frac{1}{S} \log \frac{\text{empirical average of } f_j}{\text{model expectation of } f_j}. \]

VII.6 FURTHER READING

Section VII.1 For a great introduction to hidden Markov models, refer to Rabiner (1986) and Rabiner (1990).

Section VII.2 Stochastic context-free grammars are described in Collins (1997) and Collins and Miller (1998).

Section VII.3 The following papers elaborate more on maximal entropy with regard to text processing: Reynar and Ratnaparkhi (1997); Borthwick (1999); and Charniak (2000).

Section VII.4 Maximal entropy Markov models are described in McCallum et al. (2000).

Section VII.5 Markov random fields are described in Kindermann and Snell (1980) and Jain and Chellappa (1993). Conditional random fields are described in Lafferty et al. (2001) and Sha and Pereira (2003).

VIII Preprocessing Applications Using Probabilistic and Hybrid Approaches

The related fields of NLP, IE, text categorization, and probabilistic modeling have developed increasingly rapidly in the last few years. New approaches are tried constantly, and new systems are reported numbering thousands a year. The fields largely remain an experimental science: a new approach or improvement is conceived, and a system is built, tested, and reported. However, comparatively little work is done in analyzing the results and in comparing systems and approaches with each other. Usually, it is the task of the authors of a particular system to compare it with other known approaches, and this presents difficulties – both psychological and methodological.

One reason for the dearth of analytical work, excluding the general lack of sound theoretical foundations, is that comparison experiments require software, which is usually either impossible or very costly to obtain. Moreover, the software requires integration, adjustment, and possibly training for any new use, which is also extremely costly in terms of time and human labor. Therefore, our description of the different possible solutions to the problems described in the first section is incomplete by necessity. There are just too many reported systems, and there is often no good reason to choose one approach over another. Consequently, we have tried to describe in depth only a small number of systems. We have chosen as broad a selection as possible, encompassing many different approaches. And, of course, the results produced by the systems are state of the art or sufficiently close to it.
VIII.1 APPLICATIONS OF HMM TO TEXTUAL ANALYSIS

VIII.1.1 Using HMM to Extract Fields from Whole Documents

Freitag and McCallum (Freitag and McCallum 1999, 2000) implemented a field extraction system utilizing no general-purpose NLP processing. The system is designed to solve a general problem that can be specified as follows: find the best unbroken fragment of text from a document that answers some domain-specific question. The question is stated implicitly in the form of a set of labeled training documents, each of them containing a single labeled field. For example, if the domain consists of a collection of seminar announcements, we may be interested in the location of the seminar described in a given announcement. Then the training collection should contain the labeled locations. It is of course possible to extract several fields from the same document by using several separately trained models. Each model, however, is designed to extract exactly one field from one document.

The system does its task by modeling the generative process that could generate the document. The HMM model used for this purpose has the following characteristics:

■ The observation symbols are the words and other tokens such as numbers.
■ The HMM takes an entire document as one observation sequence.
■ The HMM contains two classes of states: background states and target states. The background states emit words in which we are not interested, whereas the target states emit words that constitute the information to be extracted.
■ The HMM topology is predefined, and only a few transitions are allowed between the states.

The hand-built HMM topology is quite simple. There is one background state, which produces all irrelevant words. There are several prefix and suffix states, which are by themselves irrelevant but can provide the context for the target states. There are one or more parallel chains of target states, all of different lengths. And, finally, there are an initial state and a final state. The topology has two variable parameters: the size of the context window, which is the number of prefix and suffix states, and the number of parallel paths of target states. Several examples of topologies are shown in Figures VIII.1 and VIII.2.

Figure VIII.1. Possible topologies of a simple HMM (initial, final, background, prefix, suffix, and target states).

Figure VIII.2. A more general HMM topology.

Training such HMMs does not require using the Baum–Welch formulas because there is only one way each training document can be generated. Therefore, the maximum likelihood training for each state is conducted simply by counting the number of times each transition or emission occurred in all training sequences and dividing by the total number of times the state was visited.
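Because every training document has a unique state path, training reduces to counting. A minimal sketch (Python; the labeled-sequence representation is invented for illustration):

from collections import Counter

def train_hmm_by_counting(labeled_sequences):
    # labeled_sequences: iterable of [(state, token), ...] lists.
    trans, emit, visits = Counter(), Counter(), Counter()
    for seq in labeled_sequences:
        for i, (state, token) in enumerate(seq):
            visits[state] += 1
            emit[(state, token)] += 1
            if i + 1 < len(seq):
                trans[(state, seq[i + 1][0])] += 1
    # Divide each transition/emission count by the visit count of its state.
    emit_p = {k: c / visits[k[0]] for k, c in emit.items()}
    trans_p = {k: c / visits[k[0]] for k, c in trans.items()}
    return trans_p, emit_p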
The data sparseness problem, however, is severe – especially for the more complex topologies with a larger number of states. This problem is solved by utilizing the shrinkage technique described in Section VII.1. Several possible shrinkage hierarchies were attempted. The best results were produced by shrinking straight to the simple topology shown on the left of Figure VIII.1. All prefix and suffix states are shrunk together with the background state, and all target states are shrunk into a single target state. This simple topology is further shrunk into a single-state HMM. The system also uses a uniform level, in which the root single-state HMM is further shrunk into a single-state HMM with all emission probabilities equal to each other. This uniform level does the job of smoothing the probabilities by allowing previously unencountered tokens to have a small nonzero probability. The interpolation weights for the different levels were calculated by expectation maximization using held-out data.

The system achieved some modest success in the task of extracting the speaker, location, and time fields from seminar announcements, achieving, respectively, 71-, 84-, and 99-percent F1-measure in the best configuration, which included a window size of four as well as four parallel target paths of different sizes.

VIII.1.2 Learning HMM Structure from Data

The next work (Freitag and McCallum 2000) by the same authors explores the idea of automatically learning better HMM topologies. The HMM model works in the same way as the model described in the previous section. However, the HMM structure is not predefined and thus can be more complex. In particular, it is no longer true that every document can be generated by exactly one sequence of states. Therefore, the Baum–Welch formulas, adjusted for label constraints, are used for HMM parameter estimation.

The optimal HMM structure for a given task is built by hill climbing in the space of all possible structures. The initial, simplest structure is shown in Figure VIII.3.

Figure VIII.3. Initial HMM topology.

At each step, each of the following operations is applied to the current model:

■ Lengthen a prefix. A single state is added to the end of a prefix. The penultimate state now transitions only to the new state; the new state transitions to any target states to which the penultimate state previously transitioned.
■ Split a prefix. A duplicate is made of some prefix. Transitions are duplicated so that the first and last states of the new prefix have the same connectivity to the rest of the network as the old prefix.
■ Lengthen a suffix. The dual of the prefix-lengthening operation.
■ Split a suffix. Identical to the prefix-splitting operation except that it is applied to a suffix.
■ Lengthen a target string. Similar to the prefix-lengthening operation except that all target states, in contrast to prefix and suffix states, have self-transitions. The single target state in the simple model in Figure VIII.1 is a target string of length one.
■ Split a target string. Identical to the prefix-splitting operation except that it is applied to a target string.
■ Add a background state. Add a new background state to the model with the same connectivity, with respect to the nonbackground states, as all other background states: the new state has outgoing transitions only to prefix states and incoming transitions only from suffix states.

The model performing best on a separate validation set is selected for the next iteration. After 25 iterations, the best-performing model (scored by three-fold cross-validation) is selected from the set of all intermediate models as the final model. The experiments show that the models learned in this way usually outperform the simple hand-made models described in the previous section. For instance, in the domain of seminar announcements, the learned model achieves 77- and 87.5-percent F1-measure for the tasks of extracting the speaker and location fields, respectively.
VIII.1.3 Nymble: An HMM with Context-Dependent Probabilities

A different approach was taken by BBN (Bikel et al. 1997) in the named-entity extraction system Nymble (later called IdentiFinder). Instead of utilizing complex HMM structures to model the complexity of the problem, Nymble uses a simple, fully connected (ergodic) HMM with a single state per target concept and a single state for the background. However, the emission and transition probabilities of the states are not permanently fixed but depend on the context. The system achieved very good accuracy, outperforming the handcoded rule-based systems.

Nymble contains a handcrafted tokenizer, which splits the text into sentences and the sentences into tokens. Nymble represents tokens as pairs <w, f>, where w is the lowercase version of the token and f is the token feature – a number from 1 to 14 according to the first matching description of the token in the following list:

1. two-digit number (01)
2. four-digit number (1996)
3. alphanumeric string (A34-24)
4. digits and dashes (12-16-02)
5. digits and slashes (12/16/02)
6. digits and comma (1,000)
7. digits and period (2.34)
8. any other number (100)
9. all capital letters (CLF)
10. capital letter and a period (M.)
11. first word of a sentence (The)
12. initial letter of the word is capitalized (Albert)
13. word in lower case (country)
14. all other words and tokens (;)

The features of the tokens are chosen in such a way as to maximize the similarities in the usage of tokens having the same feature, and the Nymble model is designed to exploit those similarities. Note that the list of features depends on the problem domain and on the language. The list of features for different problems, different languages, or both would be significantly different.
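A first-match feature function of this kind is straightforward to implement. A sketch (Python; the regular expressions and the feature names, styled after Bikel et al., are illustrative approximations of the 14 categories above):

import re

FEATURES = [  # ordered (pattern, feature) pairs; the first match wins
    (r"\d{2}$", "twoDigitNum"),
    (r"\d{4}$", "fourDigitNum"),
    (r"(?=.*\d)(?=.*[A-Za-z])[\w-]+$", "containsDigitAndAlpha"),
    (r"(?=.*-)[\d-]+$", "containsDigitAndDash"),
    (r"(?=.*/)[\d/]+$", "containsDigitAndSlash"),
    (r"(?=.*,)[\d,]+$", "containsDigitAndComma"),
    (r"(?=.*\.)[\d.]+$", "containsDigitAndPeriod"),
    (r"\d+$", "otherNum"),
    (r"[A-Z]+$", "allCaps"),
    (r"[A-Z]\.$", "capPeriod"),
    (r"[A-Z][a-z]*$", "initCap"),
    (r"[a-z]+$", "lowerCase"),
]

def token_feature(token, sentence_initial=False):
    # Return the feature f of the pair <w, f> for a token.
    if sentence_initial and token[:1].isupper():
        return "firstWord"
    for pattern, feature in FEATURES:
        if re.match(pattern, token):
            return feature
    return "other"

print(token_feature("1996"))       # fourDigitNum
print(token_feature("The", True))  # firstWord
print(token_feature(";"))          # other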
The named-entity extraction task, as in the MUC evaluations (Chinchor et al. 1994), is to identify all named locations, named persons, named organizations, dates, times, monetary amounts, and percentages in text. The task can be formulated as a classification problem: given a body of text, label every word with one of the name-class tags such as Person, Organization, Location, Date, Time, Money, Percent, or Not-A-Name.

Nymble utilizes an HMM model that contains a state for each name class. There are two additional states for the beginning and the end of sentence. The HMM is fully connected (ergodic), and thus there is a nonzero probability of transition from any state to any other state. The HMM topology of Nymble is shown in Figure VIII.4.

Figure VIII.4. Nymble HMM topology (Start and End states, fully connected to name-class states such as Company Name, Person Name, and No Name).

Unlike the classical formulation, however, the transition and emission probabilities of the states in the Nymble HMM depend on their context. The probability of emitting the first token in a name class is conditioned on the previous name class. The probability of emitting any other token inside a name class is conditioned on the previous token, and the probability of transition to a new name class is conditioned on the last word in the previous name class.

Formally, such a model can be described as a classical HMM by substituting |V| new states for each name-class state, where V is the vocabulary of the system. Each new state emits the token it corresponds to with probability one, and the fixed transition probabilities between the states would then be conditioned as required. The nonstandard formulation, however, allows enormously more efficient processing, a cleaner formulation of the back-off models below, and the possibility of improving the system by conditioning the probabilities on additional context clues.

As described earlier, there are three different classes of probabilities that the model must be able to estimate:

■ the probability P(<w, f> | NC, NC−1) of generating the first token in a name class (NC), conditioned on the previous name class;
■ the probability P(<w, f> | NC, <w−1, f−1>) of generating subsequent tokens inside a name class, with each token conditioned on the previous one; and
■ the probability P(NC | NC−1, w−1) of transition to a new name class, conditioned on the previous word.

The model is trained by maximum likelihood. There is no need for Baum–Welch reestimation because for each sentence there is only one way it can be generated. Thus, the probabilities above are calculated as event counts divided by sample sizes. For instance,

P(<w, f> | NC, NC−1) = c(<w, f>, NC, NC−1) / c(NC, NC−1),

where c(…) represents the number of occurrences of a particular event in the training data.

The training data sparseness problem manifests itself here especially because the probabilities are conditioned on context. There are two separate cases: tokens that do not appear in the training data (the unknown tokens) and other events for which the training data are insufficiently representative. An unknown token <w, f> is dealt with robustly by substituting for it a pair <UNK, f> having the same feature and a new UNK word. Statistics for the unknown tokens are gathered in a separate model built specifically for dealing with them. This model is trained in the following way: the whole training set is divided into two halves, and the tokens in the first half that do not appear in the second, as well as the tokens in the second half that do not appear in the first, are substituted by UNK tokens. The unknown-words model is trained on the resulting dataset. In this way, all of the training data participate.

For dealing with the general data sparseness problem, several layers of back-off are employed:

■ The probability of generating the first word in a name class, P(<w, f> | NC, NC−1), is interpolated with P(<w, f> | NC, <any>), further with P(<w, f> | NC), with P(w | NC) · P(f | NC), and with |V|−1|F|−1.
■ The probability of generating subsequent tokens, P(<w, f> | NC, <w−1, f−1>), is interpolated with P(<w, f> | NC), with P(w | NC) · P(f | NC), and with |V|−1|F|−1.
■ The transition probability P(NC | NC−1, w−1) is interpolated with P(NC | NC−1), with P(NC), and with 1/(number of name classes).

The weights for each back-off model are computed on the fly, using the formula

\[ \lambda = \left( 1 - \frac{c(Y)}{bc(Y)} \right) \cdot \frac{1}{1 + \frac{\#(Y)}{bc(Y)}}, \]

where c(Y) is the count of event Y according to the full model, bc(Y) is the count of event Y according to the back-off model, and #(Y) is the number of unique outcomes of Y. This λ has two desirable properties. If the full model and the back-off model have similar levels of support for an event Y, then λ will be close to zero, and the full model will be used. The number of unique outcomes is a crude measure of the uniformity, or uncertainty, of the model. The more uncertainty the model has, the lower the confidence in the back-off model, and the lower the λ that is used.
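The weight computation itself is a one-liner. A sketch (Python; c_full, c_backoff, and unique_outcomes correspond to c(Y), bc(Y), and #(Y) above):

def backoff_weight(c_full, c_backoff, unique_outcomes):
    # Weight lambda given to the back-off model for event Y.
    return (1.0 - c_full / c_backoff) / (1.0 + unique_outcomes / c_backoff)

# Similar support in both models -> lambda near zero (full model dominates).
print(backoff_weight(95, 100, 10))  # ~0.045
# The full model rarely saw the history -> more weight on the back-off.
print(backoff_weight(5, 100, 10))   # ~0.86

The final probability is then the λ-weighted mixture of the back-off model with the full model, applied recursively down the chain of back-off levels listed above.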
The experimental evaluation of the Nymble system showed that, given sufficient training, it performs comparably to the best hand-crafted systems (94.9% versus 96.4% F1-measure) on mixed-case English Wall Street Journal documents and significantly outperforms them on the more difficult all-uppercase and speech-form documents (93.6% and 90.7% versus 89% and 74%, respectively).

VIII.2 USING MEMM FOR INFORMATION EXTRACTION

Recently, conditional models trained using the maximal entropy approach have received much attention. The reason for preferring them over the more traditional generative models lies in their ability to make use of arbitrary features of the observations, possibly overlapping and interdependent, in a consistent and mathematically clean way. The MEMM is one formalism, developed in McCallum et al. (2000), that allows the power of the ME approach to be used. The authors tested their implementation of MEMMs on the problem of labeling the lines in a long multipart FAQ file according to their function as a head, a question, an answer, or a tail. The problem is especially well suited for a conditional model because such a model can consider each line a single observation unit described by its features. In contrast, a generative model like an HMM would have to generate the whole line (i.e., to estimate its probability), which is clearly infeasible.

The 24 binary features (trigger constraint functions) used for classifying lines in this particular problem are shown below:

begins-with-number, begins-with-ordinal, begins-with-punctuation, begins-with-question-word, begins-with-subject, blank, contains-alphanum, contains-bracketed-number, contains-http, contains-non-space, contains-number, contains-pipe, contains-question-mark, contains-question-word, ends-with-question-mark, first-alpha-is-capitalized, indented, indented-1-to-4, indented-5-to-10, more-than-one-third-space, only-punctuation, prev-is-blank, prev-begins-with-ordinal, shorter-than-30.

As can be seen, the features of a line do not define the line completely, nor are they independent. The MEMM was compared with three other learners:

■ a stateless ME classifier, which used the 24 features to classify each line separately;
■ a traditional, fully connected HMM with four states emitting individual tokens; and
■ a similar four-state HMM emitting individual features, for which each line was converted to a sequence of features before training and testing.

It was found that the MEMM performed best of all four, and the Feature HMM was second but had significantly worse performance. The other two models functioned poorly.
VIII.3 APPLICATIONS OF CRFs TO TEXTUAL ANALYSIS

VIII.3.1 POS-Tagging with Conditional Random Fields

CRFs were developed in Lafferty et al. (2001) as a conditional ME–based version of the HMM that does not suffer from the label bias problem. Lafferty et al. applied the CRF formalism to POS tagging in Penn Treebank style and compared its performance with that of an HMM and a MEMM.

In the first set of experiments, two types of features were introduced – tag–word pairs and tag–tag pairs, corresponding to HMM observation and transition features. The results are consistent with the expectations: the HMM outperforms the MEMM as a consequence of the label bias problem, whereas the CRF and the HMM perform similarly, with the CRF slightly better overall but slightly worse for out-of-vocabulary words.

In the second set of experiments, a set of simple morphological features was added: whether a word begins with a digit or an uppercase letter, whether it contains a hyphen, and whether it ends in one of the following suffixes: -ing, -ogy, -ed, -s, -ly, -ion, -tion, -ity, -ies. Here the results also confirm the expectations: both the CRF and the MEMM benefit significantly from the use of these features – especially for out-of-vocabulary words.

VIII.3.2 Shallow Parsing with Conditional Random Fields

Shallow parsing is another sequence labeling problem. The task is to identify the nonrecursive cores of various types of phrases. The paradigmatic shallow parsing problem is NP chunking – finding the nonrecursive cores of noun phrases, the base NPs. Sha and Pereira (2003) adapt CRFs to this problem and show that the result beats all known single-model NP chunkers, performing at the level of the best known chunker – a voting arrangement of 24 forward- and backward-looking SVM classifiers.

The input to an NP chunker consists of a sentence labeled with POS tags. The chunker's task is to further label each word, indicating whether the word is (O)utside the chunk, (B)egins a chunk, or (C)ontinues a chunk.

The chunking CRF in Sha and Pereira (2003) has a second-order Markov dependency between chunk tags. This is encoded by making the labels of the CRF pairs of consecutive chunk tags. That is, the label at position i is yi = ci−1ci, where ci is the chunk tag of word i, one of O, B, or C. Because B must be used to start a chunk, the label OC is impossible. In addition, successive labels are constrained: the second tag of yi must agree with the first tag of yi+1. These constraints on the model topology are enforced by giving the appropriate features a weight of −∞, forcing all the forbidden labelings to have zero probability.

The features of the chunker CRF are represented as f(x, y, i) = g(x, i) h(yi, yi+1), where g(x, i) is a predicate on the input sequence and position and h(yi, yi+1) is a predicate on pairs of labels. Here wi, ti, and yi denote, respectively, the word, the POS tag, and the label at position i; c(yi) denotes the chunk tag, so that, for example, c(OB) = B; and w, w′, t, t′, t′′, y, y′, c are specific words, tags, labels, and chunk tags chosen from the vocabulary generated by the training data. The possibilities for the predicates are as follows:

g(x, i):
true
wi = w; wi−1 = w; wi+1 = w; wi−2 = w; wi+2 = w
(wi = w) and (wi−1 = w′); (wi = w) and (wi+1 = w′)
ti = t; ti−1 = t; ti+1 = t; ti−2 = t; ti+2 = t
(ti = t) and (ti−1 = t′); (ti = t) and (ti+1 = t′); (ti−1 = t) and (ti−2 = t′); (ti+1 = t) and (ti+2 = t′)
(ti = t) and (ti−1 = t′) and (ti−2 = t′′); (ti = t) and (ti−1 = t′) and (ti+1 = t′′); (ti = t) and (ti+1 = t′) and (ti+2 = t′′)

h(yi, yi+1):
yi = y
(yi = y) and (yi+1 = y′)
c(yi) = c

A Gaussian weight prior was used to reduce overfitting, so the log-likelihood of the training data was taken as

\[ L(\lambda) = \sum_k \big[ \lambda \cdot F(x^{(k)}, y^{(k)}) - \log Z_\lambda(x^{(k)}) \big] - \frac{\lVert \lambda \rVert^2}{2\sigma^2}. \]

The experimental evaluation demonstrates the state-of-the-art performance of the CRF chunk tagger. Interestingly, the GIS training method was shown to perform less well than some other general-purpose convex optimization algorithms – especially when many correlated features are involved; the convergence rate of GIS turns out to be much slower.
VIII.4 TEG: USING SCFG RULES FOR HYBRID STATISTICAL–KNOWLEDGE-BASED IE

Another approach has been described that employs a hybrid statistical and knowledge-based information extraction model able to extract entities and relations at the sentence level. The model attempts to retain and improve the high accuracy levels of knowledge-based systems while drastically reducing the amount of manual labor by relying on statistics drawn from a training corpus. The implementation of the model, called trainable extraction grammar (TEG), can be adapted to any IE domain by writing a suitable set of rules in an SCFG-based extraction language and training them using an annotated corpus. The system does not contain any purely linguistic components such as a POS tagger or parser. We demonstrate the performance of the system on several named-entity extraction and relation extraction tasks. The experiments show that this hybrid approach outperforms both purely statistical and purely knowledge-based systems while requiring orders of magnitude less manual rule writing and smaller amounts of training data. The improvement in accuracy is slight for named-entity extraction tasks and more pronounced for relation extraction. By devoting some attention to the details of TEG, we can provide a concrete sense of how hybrid-type systems can be employed for text mining preprocessing operations.

VIII.4.1 Introduction to a Hybrid System

Knowledge engineering (mostly rule-based) systems traditionally were the top performers in most IE benchmarks, such as MUC (Chinchor, Hirschman, and Lewis 1994), ACE (ACE 2004), and the KDD CUP (Yeh and Hirschman 2002). Recently, though, machine learning systems have become the state of the art – especially for simpler tagging problems such as named-entity recognition (Bikel, Schwartz, and Weischedel 1999) or field extraction (McCallum et al. 2000). Still, the knowledge engineering approach retains some of its advantages. It is focused around manually writing patterns to extract the entities and relations. The patterns are naturally accessible to human understanding and can be improved in a controllable way, whereas improving the results of a pure machine learning system would require providing it with additional training data. However, the impact of adding more data soon becomes infinitesimal, whereas the cost of manually annotating the data grows linearly.

TEG is a hybrid entity and relation extraction system that combines the power of knowledge-based and statistical machine learning approaches. The system is based on SCFGs. The rules for the extraction grammar are written manually, and the probabilities are trained from an annotated corpus. The powerful disambiguation ability of SCFGs allows the knowledge engineer to write very simple and naive rules while retaining their power, thus greatly reducing the required labor. In addition, the size of the needed training data is considerably smaller than the training data needed for a pure machine learning system (to achieve comparable accuracy results). Furthermore, the tasks of rule writing and corpus annotation can be balanced against each other.

VIII.4.2 TEG: Bridging the Gap between Statistical and Rule-Based IE Systems

Although the formalisms based on probabilistic finite-state automata are quite successful for entity extraction, they have shortcomings that make them harder to use for the more difficult task of extracting relationships.
One problem is that a finite-state automaton model is flat, and so its natural task is the assignment of a tag (state label) to each token in a sequence. This is suitable for tasks in which the tagged sequences do not nest and there are no explicit relations between the sequences. Part-of-speech tagging and entity extraction tasks belong to this category, and indeed the HMM-based POS taggers and entity extractors are state of the art. Extracting relationships is different because the tagged sequences can and must nest, and there are relations between them that must be explicitly recognized. Although it is possible to use nested automata to cope with this problem, we felt that using the more general context-free grammar formalism would allow for greater generality and extendibility without incurring any significant performance loss.

VIII.4.3 Syntax of a TEG Rulebook

A TEG rulebook consists of declarations and rules. Rules basically follow the classical grammar rule syntax, with a special construction for assigning concept attributes. Notation shortcuts like [ ] and | can be used for easier writing. The nonterminals referred to by the rules must be declared before usage. Some of them can be declared as output concepts, which are the entities, events, and facts that the system is designed to extract. Additionally, two classes of terminal symbols also require declaration: termlists and ngrams.

A termlist is a collection of terms from a single semantic category, written either explicitly or loaded from an external source. Examples of termlists are countries, cities, states, genes, proteins, people's first names, and job titles. Some linguistic concepts, such as lists of prepositions, can also be considered termlists. Theoretically, a termlist is equivalent to a nonterminal symbol that has a rule for every term.

An ngram is a more complex construction. When used in a rule, it can expand to any single token. The probability of generating a given token, however, is not fixed in the rules but is learned from the training dataset and may be conditioned on one or more previous tokens. Thus, using ngrams is one of the ways the probabilities of TEG rules can be made context-dependent. The exact semantics of ngrams is explained in the next section.

Let us see a simple meaningful example of a TEG grammar:

output concept Acquisition(Acquirer, Acquired);
ngram AdjunctWord;
nonterminal Adjunct;
Adjunct :- AdjunctWord Adjunct | AdjunctWord;
termlist AcquireTerm = acquired bought (has acquired) (has bought);
Acquisition :- Company->Acquirer ["," Adjunct ","] AcquireTerm Company->Acquired;

The first line defines a target relation Acquisition, which has two attributes, Acquirer and Acquired. Then an ngram AdjunctWord is defined, followed by a nonterminal Adjunct, which has two rules, separated by "|", that together define Adjunct as a sequence of one or more AdjunctWord-s. Then a termlist AcquireTerm is defined containing the main acquisition verb phrases. Finally, the single rule for the Acquisition concept is defined as a Company followed by an optional Adjunct delimited by commas, followed by an AcquireTerm and a second Company. The first Company is the Acquirer attribute of the output frame, and the second is the Acquired attribute. The final rule requires the existence of a defined Company concept.
The following set of definitions identifies the Company concept in a manner emulating the behavior of an HMM entity extractor:

output concept Company();
ngram CompanyFirstWord;
ngram CompanyWord;
ngram CompanyLastWord;
nonterminal CompanyNext;
Company :- CompanyFirstWord CompanyNext | CompanyFirstWord;
CompanyNext :- CompanyWord CompanyNext | CompanyLastWord;

Finally, in order to produce a complete grammar, we need a starting symbol and a special nonterminal that matches the strings that do not belong to any of the output concepts:

start Text;
nonterminal None;
ngram NoneWord;
None :- NoneWord None | ;
Text :- None Text | Company Text | Acquisition Text;

These 20 lines of code are able to find a fair number of Acquisitions accurately after very modest training. Note that the grammar is extremely ambiguous. An ngram can match any token, and so Company, None, and Adjunct are able to match any string. Yet, using the learned probabilities, TEG is usually able to find the correct interpretation.

VIII.4.4 TEG Training

Currently, there are three different classes of trainable parameters in a TEG rulebook: the probabilities of the rules of nonterminals, the probabilities of the different expansions of ngrams, and the probabilities of the terms in a wordclass. All those probabilities are smoothed maximum likelihood estimates calculated directly from the frequencies of the corresponding elements in the training dataset. For example, suppose we have the following simple TEG grammar that finds simple person names:

nonterm start Text;
concept Person;
ngram NGFirstName;
ngram NGLastName;
ngram NGNone;
termlist TLHonorific = Mr Mrs Miss Ms Dr;
(1) Person :- TLHonorific NGLastName;
(2) Person :- NGFirstName NGLastName;
(3) Text :- NGNone Text;
(4) Text :- Person Text;
(5) Text :- ;

By default, the initial untrained frequencies of all elements are assumed to be 1. They can be changed using the "<count>" syntax, an example of which is shown below. The numbers in parentheses on the left side are not part of the rules and are used only for reference. Let us train this rulebook on a training set containing one sentence:

Yesterday, <person>Dr Simmons,</person> the distinguished scientist, presented the discovery.

Training is done in two steps. First, the sentence is parsed using the untrained rulebook but with the constraints specified by the annotations. In our case the constraints are satisfied by two different parses, shown in Figure VIII.5 (the numbers below the nonterminals refer to the rules used to expand them). The difference between the parses is in the expansion of the Person nonterminal: both Person rules can produce the output instance, and so there is an ambiguity.

Figure VIII.5. Possible parse trees.

The ambiguity arises because both TLHonorific and NGFirstName can generate the token "Dr." In this case the ambiguity is resolved in favor of the TLHonorific interpretation because in the untrained rulebook we have P(Dr | TLHonorific) = 1/5 (the choice of one term among five equiprobable ones), whereas P(Dr | NGFirstName) ≈ 1/N, where N is the number of all known words (untrained ngram behavior). After the training, the frequencies of the different elements are updated, which produces the following trained rulebook (only the lines that were changed are shown); note the "<count>" syntax:
termlist TLHonorific = Mr Mrs Miss Ms <2> Dr;
Person :- <2> TLHonorific NGLastName;
Text :- <11> NGNone Text;
Text :- <2> Person Text;
Text :- <2> ;

Additionally, the training generates a separate file containing the statistics for the ngrams. It is similar but more complex because the bigram frequencies, token feature frequencies, and unknown-word frequencies are taken into consideration. In order to understand the details of ngram training, it is necessary to go over the details of their internal working.

An ngram always generates a single token. Any ngram can generate any token, but naturally the probability of generating one depends on the ngram, on the token, and on the immediately preceding context of the token. This probability is calculated at runtime using the following statistics:

Freq(∗) = the total number of times the ngram was encountered in the training set.
Freq(W), Freq(F), Freq(T) = the number of times the ngram was matched to the word W, the feature F, and the token T, respectively. Note that a token T is a pair consisting of a word W(T) and its feature F(T).
Freq(T | T2) = the number of times token T was matched to the ngram in the training set when the preceding token was T2.
Freq(∗ | T2) = the number of times the ngram was encountered after the token T2.

Thus, on the assumption that all those statistics are gathered, the probability of the ngram's generating a token T, given that the preceding token is T2, is estimated as

\[ P(T \mid T_2) = \frac{1}{2} \cdot \frac{Freq(T \mid T_2)}{Freq(* \mid T_2)} + \frac{1}{4} \cdot \frac{Freq(T)}{Freq(*)} + \frac{1}{4} \cdot \frac{Freq(W) \cdot Freq(F)}{Freq(*)^2}. \]

This formula linearly interpolates between three models: the bigram model, the backoff unigram model, and the further backoff word + feature unigram model. The interpolation factor was chosen to be 1/2, which is a natural choice. The experiments have shown, however, that varying the λ's within reasonable ranges does not significantly influence the performance.

Finally, matters are made somewhat more complicated by the unknown tokens. That a token was never encountered during training is by itself an important clue to the token's nature. In order to be able to use this clue, a separate "unknown" model is trained. The training set for it is created by dividing the available training data into two halves and treating one-half of the tokens, those not present in the other half, as special "unknown" tokens. The model trained in this way is used whenever an unknown token is encountered at runtime.
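A sketch of this runtime computation (Python; the freq dictionary bundling the Freq(·) statistics is an invented representation):

def ngram_token_prob(T, T2, freq):
    # P(T | T2) interpolating the bigram, unigram, and word+feature models.
    W, F = T  # a token is a pair (word, feature)
    bigram = freq["pair"].get((T, T2), 0) / max(freq["after"].get(T2, 0), 1)
    unigram = freq["token"].get(T, 0) / freq["total"]
    backoff = (freq["word"].get(W, 0) * freq["feature"].get(F, 0)
               / freq["total"] ** 2)
    return 0.5 * bigram + 0.25 * unigram + 0.25 * backoff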
VIII.4.5 Additional Features

There are several additional features that improve the system and help to customize it for other domains. First, the probabilities of different rules of a nonterminal need not be fixed but may depend on their context. Currently, the rules for a specific nonterminal can be conditioned on the previous token in a way similar to the dependence of ngram probabilities on the previous token. Other conditioning is of course possible – even to the extent of using maximal entropy for combining several conditioning events.

Second, an external tokenizer, token feature generator, or both can be substituted for the regular one. It is even possible to use several feature generators simultaneously (different ngrams may use different token feature sets). This is useful for languages other than English as well as for special domains. For instance, in order to extract the names of chemical compounds or complex gene names, it may be necessary to provide a feature set based on morphological features. In addition, an external part-of-speech tagger or shallow parser may be used as a feature generator.

For real-life IE tasks it is often necessary to extract very rare target concepts. This is especially true for relations. Although there could be thousands of Persons or Organizations in a dataset, the number of Acquisitions could well be less than 50. The ngrams participating in the rules for such concepts will surely be undertrained. In order to alleviate this problem, the shrinkage technique can be used. An infrequent specific ngram can be set to shrink to another, more common and more general ngram. Then the probability of generating a token by the ngram is interpolated with the corresponding probability for the more common "parent" ngram. A similar technique was used with great success for HMMs, and we found it very useful for TEG as well.

VIII.4.6 Example of Real Rules

This section demonstrates a fragment of the true rules written for the extraction of the PersonAffiliation relation from a real industry corpus. The fragment shows the usage of the advanced features of the system and gives another glimpse of the flavor of rule writing in TEG. The PersonAffiliation relation contains three attributes – the name of the person, the name of the organization, and the position of the person in the organization. It is declared as follows:

output concept PersonAffiliation(Name, Position, Org);

Most often, this relation is encountered in the text in the form "Mr. Name, Position of Org" or "Org Position Ms. Name." Almost any order of the components is possible, with commas and prepositions inserted as necessary. Also, it is common for Name, Position, or both to be conjunctions of pairs of corresponding entities: "Mr. Name1 and Ms. Name2, the Position1 and Position2 of Org," or "Org's Position1 and Position2, Ms. Name." In order to catch those complexities, and for general simplification of the rules, we use several auxiliary nonterms: Names, which catches one or two Names; Positions, which catches one or two Positions; and Orgs, which catches Organizations and Locations.
These can also be involved in a PersonAffiliation, as in "Bush, president of US":

nonterms Names, Positions, Orgs;
Names :- PERSON->Name | PERSON->Name "and" PERSON->Name;
Positions :- POSITION->Position | POSITION->Position "and" POSITION->Position;
Orgs :- ORGANIZATION->Org | LOCATION->Org;

We also use auxiliary nonterms that catch pairs of attributes, PosName and PosOrg:

nonterms PosName, PosOrg;
PosName :- Positions Names | PosName "and" PosName;
wordclass wcPreposition = "at" "in" "of" "for" "with";
wordclass wcPossessive = ("'" "s") "'";
PosOrg :- Positions wcPreposition Orgs;
PosOrg :- Orgs [wcPossessive] Positions;

Finally, the PersonAffiliation rules are as follows:

PersonAffiliation :- Orgs [wcPossessive] PosName;
PersonAffiliation :- PosName wcPreposition Orgs;
PersonAffiliation :- PosOrg [","] Names;
PersonAffiliation :- Names "," PosOrg;
PersonAffiliation :- Names "is" "a" PosOrg;

The rules above catch about 50 percent of all PersonAffiliation instances in the texts. Other instances depart from this form in several respects. Thus, in order to improve the accuracy, additional rules need to be written. First, the organization name is often introduced into a sentence as part of a descriptive noun phrase, as in "Ms. Name is a Position of the industry leader Org." To catch this in a general way, we define an OrgNP nonterm, which uses an external POS tagger:

ngram ngOrgNoun featureset ExtPoS restriction Noun;
ngram ngOrgAdj featureset ExtPoS restriction Adj;
ngram ngNum featureset ExtPoS restriction Number;
ngram ngProper featureset ExtPoS restriction ProperName;
ngram ngDet featureset ExtPoS restriction Det;
ngram ngPrep featureset ExtPoS restriction Prep;
nonterm OrgNounList;
OrgNounList :- ngOrgNoun [OrgNounList];
nonterms OrgAdjWord, OrgAdjList;
OrgAdjWord :- ngOrgAdj | ngNum | ngProper;
OrgAdjList :- OrgAdjWord [OrgAdjList];
nonterm OrgNP;
OrgNP :- [ngDet] [OrgAdjList] OrgNounList;
OrgNP :- OrgNP ngPrep OrgNP;
OrgNP :- OrgNP "and" OrgNP;

The external POS tagger provides an alternative token feature set, which can be used by ngrams via the ngram featureset declaration. The restriction clause in the ngram declaration specifies that the tokens matched by the ngram must have the specified feature. Altogether, the set of rules above defines the OrgNP nonterm in a way similar to the definition of a noun phrase by a syntax-parsing grammar. To use the nonterm in the rules, we simply modify the Orgs nonterm:

Orgs :- [OrgNP] ORGANIZATION->Org | LOCATION->Org;

Note that, although OrgNP is internally defined very generally (it is able to match any noun phrase whatsoever), the way it is used is very restricted. During training, the ngrams of OrgNP learn the distributions of words for this particular use, and, at runtime, the probability that OrgNP will generate a true organization-related noun phrase is much greater than for any other noun phrase in the text.

Finally, we demonstrate the use of ngram shrinkage. There are PersonAffiliation instances in which irrelevant sentence fragments separate the attributes – for example, "'ORG bla bla bla,' said the company's Position Mr. Name." In order to catch the "bla bla bla" part, we can use the None nonterm, which generates all irrelevant fragments in the text. Alternatively, we can create a separate ngram and nonterm for the specific purpose of catching irrelevant fragments inside PersonAffiliation. Both of these solutions have their disadvantages.
The None nonterm is too general and does not catch the specifics of the particular case. A specific nonterm, on the other hand, is very much undertrained. The solution is to use a specific nonterm but to shrink its ngram to None:

nonterm BlaBla;
ngram ngBlaBla -> ngNone;
BlaBla :- ngBlaBla [BlaBla];
PersonAffiliation :- Orgs BlaBla PosName;

The rules described above catch 70 percent of all PersonAffiliation instances, which is already a good result for relationship extraction from a real corpus. The process of writing rules, moreover, can be continued to further improve the accuracy.

VIII.4.7 Experimental Evaluation of TEG
The TEG techniques were evaluated using two corpora: MUC-7 and ACE-2. The results show the potential of utilizing hybrid approaches for text mining preprocessing.

The MUC-7 Corpus Evaluation – Comparison with HMM-based NER
The MUC-7 named-entity recognition (NER) corpus consists of a set of news articles related to aircraft accidents, containing about 200 thousand words with the named entities manually categorized into three basic categories: PERSON, ORGANIZATION, and LOCATION. Some other entities are also tagged, such as dates, times, and monetary units, but they did not take part in our evaluation. The corpus does not contain tagged relationships, and thus it was used to evaluate the difference in performance between four entity extractors: the regular HMM, its emulation using TEG, a set of handcrafted rules written in DIAL, and a full TEG system, which consists of the HMM emulation augmented by a small set of handcrafted rules (about 50 lines of code added). The results of the experiments are summarized in Figure VIII.6.

              HMM                   Emulation using TEG   Manual Rules          Full TEG system
              Recall  Prec   F1     Recall  Prec   F1     Recall  Prec   F1     Recall  Prec   F1
Person        86.91   85.1   86.01  86.31   86.83  86.57  81.32   93.75  87.53  93.75   90.78  92.24
Organization  87.94   89.8   88.84  85.94   89.53  87.7   82.74   93.36  88.05  89.49   90.9   90.19
Location      86.12   87.2   86.66  83.93   90.12  86.91  91.46   89.53  90.49  87.05   94.42  90.58

Figure VIII.6. Accuracy results for MUC-7.

The small accuracy difference between the regular HMM and its emulation is due to slight differences in probability conditioning methods. It is evident that the handcrafted rules performed better than the HMM-based extractors but were inferior to the performance of the TEG extractor. Significantly, the handcrafted rules achieved the best precision; however, their recall was far worse.

The HMM named-entity recognition results published in Bikel et al. (1997) are somewhat higher than we were able to produce using our version of an HMM entity extractor. We hypothesize that the reason for the difference is the use of additional training data in the Nymble experiments. The paper (Bikel et al. 1997) mentions using approximately 750K words of training data, whereas we had only 200K. Regardless of the reasons for the difference, the experiment clearly shows that the addition of a small number of handcrafted rules can further improve the results of a purely automatic HMM-based named-entity extraction.

ACE-2 Evaluation: Extracting Relationships
The ACE-2 evaluation was a follow-up to ACE-1 and included tagged relationships in addition to tagged entities. The ACE-2 annotations are more complex than those supported by the current version of our system. Most significantly, the annotations resolve all anaphoric references, which is outside the scope of the current implementation.
Therefore, it was necessary to remove annotations containing anaphoric references. This was done automatically using a simple Perl script.

For evaluating relationship extraction we chose the ROLE relation (ACE 2002). The original ACE-2 annotations make finer distinctions between the different kinds of ROLE, but for the current evaluation we felt it sufficient just to recognize the basic relationships and find their attributes. The results of this evaluation are shown in Figure VIII.7. For comparison we also show the performance of the HMM entity extractor on the entities in the same dataset.

              HMM entity extractor   Markovian SCFG        Full TEG system (with 7 ROLE rules)
              Recall  Prec    F      Recall  Prec   F      Recall  Prec   F
Role          –       –       –      67.55   69.86  68.69  83.44   77.30  80.25
Person        85.54   83.22   84.37  89.19   80.19  84.45  89.82   81.68  85.56
Organization  52.62   64.735  58.05  53.57   67.46  59.71  59.49   71.06  64.76
GPE           85.54   83.22   84.37  86.74   84.96  85.84  88.83   84.94  86.84

Figure VIII.7. Accuracy results for ACE-2.

As expected, the accuracy of a purely Markovian SCFG without additional rules is rather mediocre. However, by adding a small number of handcrafted rules (altogether about 100 lines of code), accuracy was raised considerably (by 15% in F1). The performances of the three systems on the named entities differ very little because they are essentially the same system. The slight improvement of the full TEG system is due to better handling of the entities that take part in ROLEs.

In Figure VIII.8 we can see how the accuracy of the TEG system changes as a function of the amount of available training data. There are three graphs in the figure: one representing the accuracy of the grammar with no specific ROLE rules, one representing the accuracy of the grammar with four ROLE rules, and one representing the accuracy of the grammar with seven ROLE rules.

Figure VIII.8. Accuracy (F1) of the TEG system (with different grammars) as a function of the size of the training corpus (ACE-2).

Analysis of the graphs reveals that, to achieve about 70-percent accuracy, the system needs about 125K of training data when using all of the specific ROLE rules, whereas 250K of training data are needed when no specific rules are present. Thus, adding a small set of simple rules may save 50 percent of the training data requirements.

The seven ROLE rules used by the third TEG are shown below. The rules use nonterminals and wordclasses that are defined in the rest of the grammar. The whole grammar, which has a length of about 200 lines, is too long to be included here.

1. ROLE :- [Position Before] ORGANIZATION->ROLE_2 Position ["in" GPE] [","] PERSON->ROLE_1;
2. ROLE :- GPE->ROLE_2 Position [","] PERSON->ROLE_1;
3. ROLE :- PERSON->ROLE_1 "of" GPE->ROLE_2;
4. ROLE :- ORGANIZATION->ROLE_2 "'" "s" [Position] PERSON->ROLE_1;
5. ROLE :- GPE->ROLE_2 [Position] PERSON->ROLE_1;
6. ROLE :- <5> GPE->ROLE_2 "'" "s" ORGANIZATION->ROLE_1;
7. ROLE :- PERSON->ROLE_1 "," Position wcPreposition ORGANIZATION->ROLE_2;

VIII.5 BOOTSTRAPPING
VIII.5.1 Introduction to Bootstrapping: The AutoSlog-TS Approach
One of the main problems of machine learning–based systems is that they rely on annotated corpora.
A bootstrapping approach to IE takes a middle ground between the knowledge engineering and machine learning approaches. The main idea behind this approach is that the user provides some initial bias, either by supplying a small initial lexicon or a small number of rules for inducing the initial examples. The bootstrapping approach attempts to circumvent the need for an annotated corpus, which can be very expensive and time consuming to produce.

One of the first approaches to bootstrapping was developed by Ellen Riloff and implemented in the AutoSlog-TS system (Riloff 1996a). Based on the original AutoSlog system developed previously by Riloff (Riloff 1993a), AutoSlog-TS uses a set of documents split into two bins: interesting documents and noninteresting documents. In contrast, the original AutoSlog required all relevant noun phrases within the training corpus to be tagged and hence put a much bigger load on the task of training-corpus construction. Palka (Kim and Moldovan 1995) was another system similar to AutoSlog, but it required much heavier tagging in the training corpus: each frame had to be fully tagged, and an ontology had to be provided along with the related lexicons.

AutoSlog-TS starts by using a parser that analyzes the sentences, determines clause boundaries, and marks the subjects, verbs, direct objects, and prepositional phrases of each clause. It then uses a set of extraction pattern templates and generates an extraction pattern for each noun phrase in the corpus. The extraction patterns are graded by using the two bins of documents provided by the user. Extraction patterns that appear mostly in the bin of the important documents are ranked higher. An example of the flow of the AutoSlog-TS system is shown in Figure VIII.9.

Figure VIII.9. Flow of the AutoSlog-TS system.

The main steps within AutoSlog-TS can be broken down as follows (a code sketch of the ranking computation in steps 3 and 4 appears after the list):

1. The user provides two sets of documents, interesting (I) and noninteresting (N).
2. Shallow parsing is performed for all the documents, and, on the basis of the predefined templates, all patterns that match one of the templates are extracted (EP).
3. For each extraction pattern in EP, we compute the relevance of the pattern:

Rel(Pat) = Pr(D ∈ I | Pat ∈ D) = #(I, Pat) / #(I ∪ N, Pat),

where #(X, Pat) is the number of documents in the document collection X that contain the pattern Pat.
4. We compute the importance of each extraction pattern in EP according to the following formula and rank the patterns in decreasing order:

Imp(Pat) = Rel(Pat) · log2(#(I ∪ N, Pat)).

5. The system presents the top-ranked rules to the user for evaluation.
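The ranking in steps 3 and 4 is easy to make concrete. The following is a minimal sketch, not AutoSlog-TS's actual code; documents are assumed to be represented simply as the sets of extraction patterns they contain:

import math

def rank_patterns(interesting, noninteresting):
    # interesting / noninteresting: lists of documents, each given as the
    # set of extraction patterns found in it.
    all_docs = interesting + noninteresting
    patterns = set().union(*all_docs)
    ranked = []
    for pat in patterns:
        n_rel = sum(pat in doc for doc in interesting)  # #(I, Pat)
        n_all = sum(pat in doc for doc in all_docs)     # #(I u N, Pat)
        rel = n_rel / n_all                             # Rel(Pat)
        imp = rel * math.log2(n_all)                    # Imp(Pat)
        ranked.append((pat, imp))
    ranked.sort(key=lambda pair: pair[1], reverse=True)
    return ranked

Note that a pattern occurring in a single document gets log2(1) = 0 importance, which is consistent with discarding such patterns, as was done in the evaluation described next.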
The system was evaluated on MUC-4 documents. A total of 1,500 MUC-4 documents were used, and 50 percent of them were relevant according to the user. The system generated 32,345 patterns, and after the patterns supported by only one document were discarded, 11,225 patterns were left. The top 24 extraction patterns are shown in Figure VIII.10.

<subj> exploded        Murder of <np>            Assassination of <np>
<subj> was killed      <subj> was kidnapped      Attack on <np>
<subj> was injured     Exploded in <np>          Death of <np>
<subj> took place      Caused <dobj>             Claimed <dobj>
<subj> was wounded     <subj> occurred           <subj> was located
Took place on <np>     Responsibility for <np>   Occurred on <np>
Was wounded in <np>    Destroyed <dobj>          <subj> was murdered
One of <np>            <subj> kidnapped          Exploded on <np>

Figure VIII.10. Table of the top 24 extraction patterns in the AutoSlog-TS evaluation.

The user reviewed the patterns and labeled the ones she wanted to use for actual extraction. So, for instance, "<subj> was killed" was selected for inclusion in the extraction process, and <subj> was replaced by <victim>. It took the user 85 minutes to review the top 1,970 patterns. Certainly this approach shows much promise for building new extraction systems quickly because very little manual effort is needed in terms of rule writing and corpus annotation. The primary drawback is that a fairly strong parser needs to be used for analyzing the candidate sentences.

VIII.5.2 Mutual Bootstrapping
Riloff and Jones (Riloff and Jones 1999) took this idea of bootstrapping even further by suggesting mutual bootstrapping. Here the starting point is a small lexicon of entities (the seed) that share the same semantic category. In a way similar to AutoSlog-TS, the corpus is processed and all possible extraction patterns are generated, along with the noun phrases they extract. The main purpose of this approach is to extend the initial lexicon and to learn accurate extraction patterns that can extract instances for the lexicon. The procedure is given below; a code sketch of the loop follows the listing.

Initialization
■ N = total number of extraction patterns
■ EP_i = one extraction pattern (i = 1..N)
■ EPData = a list of pairs (EP_i, noun phrases generated by EP_i)
■ SemLex = the list of seed words (the initial lexicon)
■ EPlist = {}

Loop
1. Score all extraction patterns in EPData: find for each EP_i how many items from SemLex it can generate.
2. BestEP = the highest scoring extraction pattern (the one that extracted the highest number of items from SemLex).
3. Add BestEP to EPlist.
4. Add BestEP's extractions to SemLex.
5. Go to 1.
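In code, this loop might look as follows. This is a simplified sketch under the assumption that every pattern's extractions are available up front as a set of noun phrases; the real algorithm includes scoring refinements and stopping criteria not shown here:

def mutual_bootstrap(ep_data, seed, iterations=10):
    # ep_data: dict mapping each extraction pattern to the set of noun
    # phrases it extracts from the corpus. seed: the initial lexicon.
    sem_lex = set(seed)
    ep_list = []
    for _ in range(iterations):
        # Score each still-unused pattern by how many current lexicon
        # items it extracts, and pick the best one.
        candidates = {p: nps for p, nps in ep_data.items() if p not in ep_list}
        if not candidates:
            break
        best = max(candidates, key=lambda p: len(candidates[p] & sem_lex))
        ep_list.append(best)
        sem_lex |= ep_data[best]  # add the best pattern's extractions
    return ep_list, sem_lex

The mutual dependence is visible in the loop: the growing lexicon rescores the patterns, and each newly trusted pattern in turn grows the lexicon.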
The top 12 extraction patterns in each of 3 problems (locations mentioned in company home pages, company names mentioned in company home pages, and locations mentioned in terrorist-related documents) are shown in Figure VIII.11.

www location          www company                terrorism location
offices in <x>        owned by <x>               living in <x>
facilities in <x>     <x> employed               traveled to <x>
operations in <x>     <x> is distributor         become in <x>
operates in <x>       <x> positioning            Sought in <x>
seminars in <x>       motivated <x>              presidents of <x>
activities in <x>     sold to <x>                parts of <x>
consulting in <x>     Devoted to <x>             To enter <x>
outlets in <x>        <x> thrive                 ministers of <x>
customers in <x>      Message to <x>             part in <x>
distributors in <x>   <x> request information    taken in <x>
services in <x>       <x> has positions          returned to <x>
expanded into <x>     offices of <x>             process in <x>

Figure VIII.11. Table of extraction patterns from mutual bootstrapping.

VIII.5.3 Metabootstrapping
One of the main problems encountered with mutual bootstrapping is that once a word that does not belong to the semantic category is added to the lexicon, a domino effect can be created, allowing incorrect extraction patterns to receive high scores and thus adding many more incorrect entries to the lexicon. To prevent this problem, Riloff and Jones suggest another method called metabootstrapping, which allows finer grained control over the instances that are added to the lexicon.

In metabootstrapping, only the top five instances that are extracted by using the best extraction pattern are retained and added to the permanent semantic lexicons. All other instances are discarded. The instances are scored by counting, for each instance, how many extraction patterns can extract it. Formally, the score of instance I_j is computed as follows:

score(I_j) = Σ_{k=1..N_j} (1 + 0.01 · Imp(Pattern_k)),

where N_j is the number of extraction patterns that generated I_j.

After the new instances are added to the permanent semantic lexicon, the mutual bootstrapping starts from scratch. A schematic view of the flow of the metabootstrapping process is presented in Figure VIII.12.

Figure VIII.12. Flow diagram of metabootstrapping.
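The instance-scoring step can be sketched directly from the formula above (an illustration only; each candidate instance is assumed to carry the set of patterns that extracted it, together with their Imp scores):

def top_instances(instance_patterns, imp, k=5):
    # instance_patterns: dict mapping each candidate instance to the set
    # of extraction patterns that extracted it.
    # imp: dict of Imp(pattern) values, as computed by AutoSlog-TS-style
    # ranking.
    def score(inst):
        # Each supporting pattern contributes 1, plus a small bonus
        # proportional to the pattern's importance.
        return sum(1 + 0.01 * imp[p] for p in instance_patterns[inst])
    return sorted(instance_patterns, key=score, reverse=True)[:k]

The dominant term is simply the number of distinct patterns that extract the instance; the 0.01 factor makes pattern importance a mild tie-breaker rather than the main signal.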
Evaluation of the Metabootstrapping Algorithm
Two datasets were used: one of 4,160 company Web pages and one of 1,500 articles taken from the MUC-4 corpus. Three semantic categories were extracted from the Web pages (locations, company names, and titles of people), and two semantic categories were extracted from the terror-related articles (locations and weapons). The metabootstrapping algorithm was run for 50 iterations. During each iteration, the mutual bootstrapping was run until it produced 10 patterns that extracted at least one new instance that could be added to the lexicon. In Figure VIII.13, one can see how the accuracy of the semantic lexicon changes with the number of iterations. The easiest category is Web location, and the most difficult categories are weapon and Web title (titles of people mentioned on the Web page).

Figure VIII.13. Accuracy of the semantic lexicons as a function of the number of mutual bootstrapping iterations.

VIII.5.4 Using Strong Syntactic Heuristics
Phillips and Riloff (Phillips and Riloff 2002) took a different approach to building semantic lexicons. They learned two lexicons: one containing proper noun phrases (PNP) and the other generic noun phrases (GN). The lexicons were acquired by using a set of syntactic heuristics. In particular, they used three types of patterns. The architecture of the heuristic-based bootstrapping is shown in Figure VIII.14.

Figure VIII.14. Heuristic-based bootstrapping.

The first type included appositives such as "the president, George Bush," or "Mary Smith, the analyst." The second type consisted of IS-A clauses, which are an NP followed by a "to be" VP followed by an NP. An example of an IS-A clause is "Bill Gates, the chairman of Microsoft." The last type comprised compound nouns that have the form GN + PNP. An example of such a construct is "the senator John Kerry." A mutual property of all three types is that they establish a relationship between at least one GN and one PNP. The bootstrapping algorithm will infer relationships between an element that is already in one of the lexicons and an element that is not yet in any of the lexicons. These relations enable the algorithm each time to extend either the PNP lexicon or the GN lexicon. The algorithm alternates between learning new GNs based on the PNP lexicon and learning new PNPs based on the GN lexicon.

As an example, suppose one is trying to extend the people lexicons, the PNP person lexicon already contains the name "John Kerry," and the phrase "senator John Kerry" is encountered. One would then learn that "senator" is a generic noun that stands for a person, and it would be added to the GN person lexicon. New names of people that come after the GN "senator" can then be learned.

Normally, when a proper noun phrase is added to the PNP lexicon, the full phrase is used, whereas a generic noun phrase is typically added to the GN lexicon with just its head noun. This is done to increase the generality of the lexicon without sacrificing accuracy. Take, for instance, the generic noun phrase "financial analyst." It is enough just to add "analyst" to the GN lexicon, and no harm will result. On the other hand, consider the proper noun phrase "Santa Barbara." Clearly, we cannot add just "Santa" or just "Barbara" to the PNP lexicon of locations.

One of the main problems of bootstrapping approaches in general is that some generic phrases are ambiguous and can be used with a variety of semantic classes. An example is the generic noun "leader." This noun can designate either a company (which is a leader in its area) or a person (in the political domain or in the financial–corporate domain). If we add "leader" to the GN lexicon of people, in the next iteration it will add many corporations and contaminate our PNP people lexicon. To alleviate this problem, the authors suggested using an exclusivity measure attached to each of the noun phrases. Only noun phrases whose exclusivity measure exceeds some predefined threshold are added to the lexicon. Given a phrase P and a semantic category C,

Exclusivity(P, C) = #(P, C) / #(P, ¬C),

where #(P, C) is the number of sentences in which P is collocated with at least one member of C, and #(P, ¬C) is the number of sentences in which P is collocated with at least one member of any of the semantic classes other than C. A typical exclusivity threshold is 5.
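The exclusivity computation is straightforward to sketch (a minimal illustration; sentences are assumed to be given as sets of phrases, and division by zero is treated as unbounded exclusivity):

def exclusivity(phrase_sents, category_lex, other_lex):
    # phrase_sents: sentences (as sets of phrases) containing candidate P.
    # category_lex: known members of the target category C.
    # other_lex: known members of all the other categories.
    in_cat = sum(bool(s & category_lex) for s in phrase_sents)   # #(P, C)
    in_other = sum(bool(s & other_lex) for s in phrase_sents)    # #(P, not-C)
    return in_cat / in_other if in_other else float("inf")

# A candidate is admitted only if, e.g., exclusivity(...) >= 5.

With the threshold of 5 cited above, a phrase such as "leader" that co-occurs nearly as often with companies as with people would be rejected from both GN lexicons.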
VIII.5.4.1 Evaluation of the Strong Syntactic Heuristics
This approach was tested on 2,500 Wall Street Journal articles (People and Organizations) and on a set of 1,350 press releases from the pharmacology domain (People, Organizations, and Products). The heuristics that added the highest number of entries to the PNP semantic lexicons were the compound heuristics, whereas the appositive heuristics added the highest number of entries to the GN lexicons. The accuracy for the Wall Street Journal articles was between 80 and 99 percent for the PNP lexicons and between 30 and 95 percent for the GN lexicons. The accuracy results dropped when tested against the pharmaceutical press releases (77–95% for the PNP and 9–91% for the GN).

VIII.5.4.2 Using Cotraining
Blum and Mitchell (Blum and Mitchell 1998) introduced the notion of cotraining – a learning technique that tries to learn from a variety of views and sources simultaneously. Because there are three heuristics for learning the semantic lexicons, cotraining can be used after each bootstrapping cycle: the three lexicons are joined after each step, and a richer lexicon results for each of them. A simple filtering mechanism can be used to eliminate entries with low support; it is common to add to the combined lexicon just those entries supported by at least three sentences. Using the cotraining method results in a much more rapid learning of the lexicons (between 20 and 250 percent more entries were acquired) without much loss in accuracy.

VIII.5.5 The Basilisk Algorithm
Following in the footsteps of Riloff and Jones, Thelen and Riloff (Thelen and Riloff 2002) suggested a similar algorithm called Basilisk (Bootstrapping Approach to SemantIc Lexicon Induction using Semantic Knowledge). Differing from the metabootstrapping approach, which uses a two-level loop (with mutual bootstrapping in the inner loop), Basilisk uses just a one-level loop and hence is more efficient. It solves the accuracy problem of mutual bootstrapping by utilizing a weighted combination of extraction patterns. In particular, the approach utilizes 20 + i (where i is the index of the bootstrapping loop) extraction patterns as the pattern pool. The general architecture of Basilisk is shown in Figure VIII.15.

Figure VIII.15. The Basilisk algorithm.

RlogF(pattern) was defined when we discussed the AutoSlog-TS system. The score of a phrase PH is defined as the average, over the extraction patterns that extract PH, of the log of the number of valid extractions of each pattern (for the given semantic category). The rationale is that a pattern is more trusted if it extracts a higher number of valid members of the semantic category. The log of the number of extractions is used so that a small number of extraction patterns having a particularly high number of valid extractions will not affect the average too drastically. Formally,

■ #(PH_i) is the number of extraction patterns that extract the phrase PH_i.
■ F_j is the number of valid extractions that were extracted by pattern P_j.

score(PH_i) = ( Σ_{j=1..#(PH_i)} log2(F_j + 1) ) / #(PH_i)     (1.1)

Note that here the assumption is that we have just one semantic category. If we are dealing with several semantic categories, then we change score(PH_i) to score(PH_i, C).

VIII.5.5.1 Evaluation of Basilisk on Single-Category Bootstrapping
Basilisk was compared against metabootstrapping on 1,700 MUC-4 documents. In the specific experiment performed by Thelen, just single nouns were extracted in both systems. Basilisk outperformed metabootstrapping in all six categories (building, event, human, location, time, weapon) by a considerable margin.

VIII.5.5.2 Using Multiclass Bootstrapping
Rather than learning one semantic category at a time, it seems that it would be beneficial to learn several semantic classes simultaneously. Clearly, the main hurdle would be those words that are polysemic and could belong to several semantic classes. To alleviate this problem we make the common assumption of "one sense per domain," and so our task is to find a conflict resolution strategy that can decide to which category each polysemic word should belong. The conflict resolution strategy used by Thelen preferred, for any given phrase, the semantic category assigned in a former iteration of the bootstrapping algorithm, and, if two categories are suggested during the same iteration, the category for which the phrase got the higher score is selected.
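A sketch of the phrase-scoring formula (1.1) (an illustration only; phrase_patterns is assumed to map each candidate phrase to the patterns that extract it, and valid_extractions gives each pattern's F_j count):

import math

def basilisk_score(phrase, phrase_patterns, valid_extractions):
    # Average log-credit a candidate phrase receives from the patterns
    # that extract it, as in equation (1.1).
    pats = phrase_patterns[phrase]
    total = sum(math.log2(valid_extractions[p] + 1) for p in pats)
    return total / len(pats)

In each Basilisk iteration, the best-scoring candidate phrases (five per iteration) are then added to the lexicon, and the pattern pool is regrown around the updated lexicon.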
Another change that was able to boost the results and distinguish between the competing categories is to use mscore(PH_i, C_a), as defined below, rather than score(PH_i, C_a) as in equation (1.1):

mscore(PH_i, C_a) = score(PH_i, C_a) − max_{b≠a} score(PH_i, C_b)     (1.2)

This definition prefers phrases or words that are highly associated with one category while being only loosely (if at all) associated with any of the other categories.

VIII.5.5.3 Evaluation of the Multiclass Bootstrapping
The performance of Basilisk improved when using conflict resolution with the mscore function. The improvement was more notable for the categories BUILDING, WEAPON, and LOCATION. When the same strategy was applied to metabootstrapping, the improvement was much more dramatic (up to a 300-percent improvement in precision). In Figure VIII.16 we can see the precision of the Basilisk system on the various semantic categories after 800 entries were added to each of the lexicons. The recall for these categories was between 40 and 60 percent.

Semantic Category   Number of Correct Entries   Precision
Building            109                         13.6%
Event               266                         26.6%
Human               681                         85.1%
Location            509                         63.6%
Time                43                          5.4%
Weapon              88                          11.0%

Figure VIII.16. Precision of the multicategory bootstrapping system Basilisk.

VIII.5.6 Bootstrapping by Using Term Categorization
Another method for the semiautomatic generation of thematic lexicons, by means of term categorization, is presented in Lavelli, Magnini, and Sebastiani (2002). They view the generation of such lexicons as an iterative process of learning previously unknown associations between terms and themes. The process is iterative and generates for each theme a sequence of lexicons that are bootstrapped from an initial lexicon. The terms that appear in the documents are represented as vectors in a space of documents and then are labeled with themes by using classic categorization techniques. Specifically, the authors used the AdaBoost algorithm. The intermediate lexicons generated by the AdaBoost algorithm are cleaned, and the process restarts using the cleaned lexicon as the new positive set of terms. The authors used subsets of the Reuters RCV1 collection as the document corpus and some of WordNetDomains's synsets as the semantic lexicons (split into training and test). The results for various sizes of corpora show that quite an impressive precision (around 75%) was obtained, whereas the recall was around 5–12 percent. Clearly, because there is no inherent connection between the corpus selected and the semantic lexicons, we cannot expect a much higher recall.

VIII.5.7 Summary
The bootstrapping approach is very useful for building semantic lexicons for a variety of categories. The approach is suitable mostly for semiautomatic processes because the precision and recall we can obtain are far from perfect. Bootstrapping is beneficial as a tool to be used in tandem with other machine learning or rule-based approaches to information extraction.

VIII.6 FURTHER READING
Section VIII.1
More information on the use of HMMs for text processing can be found in the following papers: Kupiec (1992); Leek (1997); Seymore, McCallum, and Rosenfeld (1999); McCallum, Freitag, and Pereira (2000); and Sigletos, Paliouras, and Karkaletsis (2002).

Section VIII.2
Applications of MEMM for information extraction are described in the following papers: Borthwick (1999), Charniak (2000), and McCallum et al. (2000).

Section VIII.3
Applications of CRF for text processing are described in Lafferty et al. (2001) and Sha and Pereira (2003).
Section VIII.4
TEG is described in Rosenfeld et al. (2004).

Section VIII.5
More details on bootstrapping for information extraction can be found in the following papers: Riloff (1993a), Riloff (1996a), Riloff and Jones (1999), Lavelli et al. (2002), Phillips and Riloff (2002), and Thelen and Riloff (2002).

IX Presentation-Layer Considerations for Browsing and Query Refinement

Human-centered knowledge discovery places great emphasis on the presentation layer of systems used for data mining. All text mining systems built around a human-centric knowledge discovery paradigm must offer a user robust browsing capabilities as well as abilities to display dense and difficult-to-format patterns of textual data in ways that foster interactive exploration. A robust text mining system should offer a user control over the shaping of queries by making search parameterization available through both high-level, easy-to-use GUI-based controls and direct, low-level, relatively unrestricted query-language access. Moreover, text mining systems need to offer a user administrative tools to create, modify, and maintain concept hierarchies, concept clusters, and entity profile information.

Text mining systems also rely, to an extraordinary degree, on advanced visualization tools. More on the full gamut of visualization approaches – from the relatively mundane to the highly exotic – relevant for text mining can be found in Chapter X.

IX.1 BROWSING
Browsing is a term open to broad interpretation. With respect to text mining systems, however, it usually refers to the general front-end framework through which an end user searches, queries, displays, and interacts with embedded or middle-tier knowledge-discovery algorithms. The software that implements this framework is called a browser. Beyond their ability to allow a user to (a) manipulate the various knowledge discovery algorithms they may operate and (b) explore the resulting patterns, most browsers also generally support functionality to link to some portion of the full text of documents underlying the patterns that these knowledge discovery algorithms may return.

Usually, browsers in text mining operate as a user interface to specialized query languages that allow parameterized operation of different pattern-search algorithms, though this functionality is now almost always commanded through a graphical user interface (GUI) in real-world text mining applications. This means that, practically, many discovery operations are "kicked off" by a query for a particular type of pattern through a browser interface, which runs a query argument that executes a search algorithm. Answers are returned via a large number of possible display modalities in the GUI, ranging from simple lists and tables to navigable nodal trees to complex graphs generated by extremely sophisticated data visualization tools.

Figure IX.1. Example of an interactive browser for distributions. (From Feldman, Fresko, Hirsh, et al. 1998.)

Once a query is parameterized and run, browsers allow for the exploration of the potentially interesting or relevant patterns generated by search operations. On a basic level, the search algorithms of the core mining operations layer have to process search spaces of instances for a selected pattern type.
This search, however, is structured in relation to certain specified search constraints, and appropriate refinement strategies and pruning techniques are chosen. Such constraints and pruning approaches can be partly or fully specified through a browser interface, though the logic of such refinement techniques may, from a system architecture perspective, reside in a separate set of services that may be invoked by both presentation-layer and search-algorithm components.

All patterns can be studied in the context of a conditioning concept set or context free (i.e., for the general domain of the whole collection). Conditioning a search task therefore means selecting a set of concepts that is used to restrict an analysis task (e.g., a restriction to documents dealing with USA and economic issues, or IBM and hard drive components). For example, Figure IX.1 shows a simple distribution browser that allows a user to search for specific distributions while looking at a concept hierarchy to provide some order and context to the task.

Many text mining systems provide a heterogeneous set of browsing tools customized to the specific needs of the different types of "entities" addressed by the system. Most text mining systems increase the opportunities for user interactivity by offering the user the ability to browse, by means of visual tools, such entities as documents, concept distributions, frequent sets, associations, trends, clusters of documents, and so on. Moreover, it is not uncommon for text mining systems to offer multiple methods for browsing the same entity type (e.g., graphs, lists, and hierarchical trees for documents; maps and hierarchical trees for concept names; etc.).

Although all knowledge discovery operations are susceptible to overabundance problems with respect to patterns, it is typical for text mining systems, in particular, to generate immense numbers of patterns. For almost any document collection of more than a few thousand documents, huge numbers of concept distributions, relations between distributions, frequent concept sets, undirected relations between frequent concept sets, and association rules can be identified. Therefore, a fundamental requirement for any text mining system's browsing interface is the ability to robustly support the querying of the vast implicit set of patterns available in a given document collection.

Practically, however, text mining systems often cope best – and allow users to cope best – with the challenges of pattern overabundance by offering sophisticated refinement tools, available while browsing, that allow the shaping, constraining, pruning, and filtering of result-set data. Another extremely critical point in managing pattern overabundance is ensuring that the user of a text mining system has an adequate capability for inputting and manipulating what has been referred to as the measures of interestingness of patterns in the system.

IX.1.1 Displaying and Browsing Distributions
Traditional document retrieval systems allow a user to ask for all documents containing certain concepts – UK and USA, for example – but then present the entire set of matching documents with little information about the collection's internal structure other than perhaps sorting them by relevance score (a shallow measure computed from the frequency and position of concepts in the document) or chronological order.
In contrast, browsing distributions in a text mining system can enable a user to investigate the contents of a document set by sorting it according to the child distribution of any node in a concept hierarchy, such as topics, countries, companies, and so on. Once the documents are analyzed in this fashion and the distribution is displayed, a user could, for instance, access the specific documents of each subgroup (see Figure IX.2).

Figure IX.2. Topic (concept) distribution browser from the KDT system selecting for USA and UK. (From Feldman, Dagan, and Hirsh 1998. Reprinted with permission of Springer Science and Business Media.)

One way to generate a distribution is to provide two Boolean expressions. The first expression defines the selection condition for the documents. The second expression defines the distribution to be computed on the set of chosen documents. For instance, the user can specify the expression "USA and UK" as the selection criterion, and only documents containing both concepts will be selected for further processing. The distribution expression can be "topics," in which case a set of rules correlating USA and UK with any of the concepts defined under the node "topics" in the taxonomy will be obtained. The results could be shown in a hierarchical way based on the structure of the taxonomy underneath "topics."

One can see, for instance, an association rule such as USA, UK ⇒ acq 42/19.09%. This rule means that in 19.09 percent of the documents in which both USA and UK are mentioned, the topic acquisition is mentioned too, which amounts to 42 documents. The user could then click on that rule to obtain the list of the 42 documents supporting it. A second association rule could be USA, UK ⇒ currency 39/17.73%. In this example, let us assume that currency is an internal node and not a concept found in the documents. The meaning of the rule, therefore, is that in 17.73 percent of the documents in which both USA and UK are mentioned, at least one of the topics underneath the node "currency" in the taxonomy is mentioned too, which amounts to 39 documents. The user could then expand that rule and get a list of more specialized rules in which the right-hand side (RHS) of each rule is a child of the node "currency." In this case, one would find UK and USA to be highly associated with money fx (foreign exchange), dlr (US dollar), and yen. A sketch of how such rule statistics are computed from raw document counts appears below.
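The percentages attached to such rules are simply supports and confidences over the conditioned document set. The following is a minimal sketch (documents are assumed to be represented as sets of concept labels, with internal taxonomy nodes such as "currency" already expanded into their child concepts):

def rule_stats(docs, lhs, rhs_concepts):
    # docs: iterable of documents, each a set of concepts.
    # Computes support and confidence for rules lhs => c,
    # as in "USA, UK => acq 42/19.09%".
    selected = [d for d in docs if lhs <= d]  # documents matching the LHS
    if not selected:
        return {}
    stats = {}
    for c in rhs_concepts:
        support = sum(c in d for d in selected)     # e.g., 42 documents
        confidence = support / len(selected)        # e.g., 0.1909
        stats[c] = (support, confidence)
    return stats

# e.g., rule_stats(docs, {"USA", "UK"}, {"acq", "currency"}) would
# reproduce pairs like (42, 0.1909) on the collection of the example.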
IX.1.2 Displaying and Exploring Associations
Even when the data from a document collection are moderately sized, association-finding methods will often generate substantial numbers of results. Therefore, association-discovery tools in text mining must assist a user in identifying the useful results among all those the system generates.

One method for doing this is to support association browsing by clustering associations with identical left-hand sides (LHSs). These clusters can then be displayed in decreasing order of the generality of their LHS: associations with more general LHSs are listed before more specific associations. The top-level nodes in the hierarchical tree are sorted in decreasing order of the number of documents that support all the associations in which they appear.

Some text mining systems include fully featured, association-specific browsing tools (see Figures IX.3 and IX.4) geared toward providing users with an easy way to find associations and then filter and sort them in different orders.

Figure IX.3. An example of an advanced tool for browsing and filtering associations. (From Feldman, Kloesgen, Ben-Yehuda, et al. 1997.)

This type of browser tool can support the specification of simple constraints on the presented associations. The user can select a set of concepts from the set of all possible concepts appearing in the associations and then choose the logical test to be performed on the associations. In even a simple version of this type of tool, the user can see all associations containing either of these concepts (or), all of these concepts (and), or those associations whose concepts are included in the list of selected concepts (subset). He or she could also select one of the internal nodes in the taxonomy, in which case the list of concepts under this node would be used in the filtering.

For instance, if one set the support threshold at 10 and the confidence threshold at 10 percent, an overwhelming number of associations would result. Clearly, no user could digest this amount of information. An association browser tool, however, would allow the user to choose to view only those associations that contain, for instance, both the concepts USA and acq (a shorthand concept label for "company acquisition"). This would allow him or her to see what countries are associated with USA with regard to acquisition, along with all the statistical parameters related to each association.

Figure IX.4. An example of another GUI tool for displaying and browsing associations. (From Feldman, Kloesgen, Ben-Yehuda, et al. 1997.)

Even relatively simple techniques, such as sorting, can afford considerable utility, and browsers can provide a user with several sorting options for associations. Two options are rather obvious: sorting the associations in alphabetical order and sorting them in decreasing order of their confidence. A third ordering scheme is based on the chi-square value of the association. In a way, this approach attempts to measure how different the probability of seeing the RHS of the association given that one saw its LHS is from the probability of seeing the RHS in the whole population. A sketch of such a chi-square ordering appears below.
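One simple way to realize this ordering is a two-by-two chi-square test on document counts. The sketch below is an illustration only, not the exact statistic of any particular system; each association is assumed to carry four counts: documents containing both LHS and RHS, documents containing the LHS, documents containing the RHS, and the total number of documents:

def chi_square(n_lhs_rhs, n_lhs, n_rhs, n_total):
    # Chi-square statistic of an association LHS => RHS over the
    # 2x2 contingency table induced by the four document counts.
    chi = 0.0
    for lhs_in in (True, False):
        for rhs_in in (True, False):
            # Observed count for this cell of the table.
            if lhs_in and rhs_in:
                obs = n_lhs_rhs
            elif lhs_in:
                obs = n_lhs - n_lhs_rhs
            elif rhs_in:
                obs = n_rhs - n_lhs_rhs
            else:
                obs = n_total - n_lhs - n_rhs + n_lhs_rhs
            # Expected count if LHS and RHS were independent.
            p_l = n_lhs / n_total if lhs_in else 1 - n_lhs / n_total
            p_r = n_rhs / n_total if rhs_in else 1 - n_rhs / n_total
            expected = p_l * p_r * n_total
            chi += (obs - expected) ** 2 / expected
    return chi

# Associations can then be presented in decreasing order of this value:
# associations.sort(key=lambda a: chi_square(*a.counts), reverse=True)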
IX.1.3 Navigation and Exploration by Means of Concept Hierarchies
Concept hierarchies and taxonomies can play many different roles in text mining systems. However, it is important not to overlook the usefulness of various hierarchical representations in navigation and user exploration.

Often, it is visually easier to traverse a comprehensive tree structure of nodes relating to all the concepts relevant to an entire document collection or an individual pattern-query result set than to scroll down a long, alphabetically sorted list of concept labels. Indeed, sometimes the knowledge inherent in the hierarchical structuring of concepts can serve as an aid to interactive or free-form exploration of concept relationships, or both – a critical adjunct to uncovering hidden but interesting knowledge.

A concept hierarchy or taxonomy can also enable the user of a text mining system to specify mining tasks concisely. For instance, when beginning the process of generating association rules, the user, rather than looking for all possible rules, can specify interest only in the relationships of companies in the context of business alliances.

To support this, the text mining system could display a concept hierarchy with two nodes marked "business alliances" and "companies," for instance. The first node would contain terms related to business alliances such as "joint venture," "strategic alliance," "combined initiative," and so on, whereas the second node would be the parent of all company names in the system (which could be the result of human effort specifying such a higher level term, though in many text mining systems a set of rules is employed with knowledge extracted from Internet-based or other commercial directories to generate company names). In this example, the user could perform a comprehensive search with a few clicks on two nodes of a hierarchical tree. The user would thus avoid the kind of desultory, arbitrary, and incomplete "hunting and pecking" that might occur if he or she had to manually input from memory – or even choose from a pick list – the various relevant words relating to business alliances and companies needed to create his or her query. A very simple graphical display of a concept hierarchy for browsing can be seen in Figure IX.5.

Figure IX.5. A simple graphical interface for creating, exploring, and manipulating a taxonomy. (From Feldman, Dagan, and Hirsh 1998. Reprinted with permission of Springer Science and Business Media.)

In addition, concept hierarchies can be an important mechanism for supporting the administration and maintenance of user-defined information in a document collection. For instance, entity-profile maintenance and user-specified concept or document clustering can often be facilitated by means of the quick navigational opportunities afforded by tree-based hierarchical structures.

IX.1.4 Concept Hierarchy and Taxonomy Editors
Maintaining concept hierarchies and taxonomies is an important but difficult task for users of the text mining systems that leverage them. Therefore, presentation-layer tools that allow for easier and more comprehensive administration serve an important role in increasing the usability and effectiveness of the text mining process.

Concept hierarchy editing tools build on many of the same features a user needs to employ a concept hierarchy as a navigational tool. The user must be able to search for and locate specific concepts as well as hypernyms and hyponyms; fuzzy search capability is an important adjunct that allows a user to scrub a hierarchy properly when making major category changes. An example of a graphical hierarchy editing tool appears in Figure IX.6.

Figure IX.6. User interface for a taxonomy editor showing views of source and target taxonomy trees. (From Feldman, Fresko, Hirsh, et al. 1998.)

Moreover, an important feature in such an editor can be the ability to view the existing source concept hierarchy in read-only mode while editing a target concept hierarchy at the same time. This can help a user avoid making time-consuming errors or creating inconsistencies when editing complex tree structures or making wholesale modifications.

IX.1.5 Clustering Tools to Aid Data Exploration
Although several methods for creating smaller subset-type selections of documents from a text mining system's main document collection have already been discussed, there are numerous situations in which a user may want to organize groups of documents into clusters according to more complex, arbitrary, or personal criteria.
For instance, a user of a text mining system aimed at scientific papers on cancer research may want to cluster papers according to the biomedical subdiscipline (e.g., immunology, microbiology, virology, molecular biology, human genetics, etc.) of each paper's lead author. Similarly, a user of a document collection composed of news feeds might want to leverage his or her text mining system's concept hierarchy to cluster patterns involving individual countries under labels representing larger, intercountry groupings (see Figure IX.7).

Figure IX.7. Clustering associations using a category hierarchy. (From Feldman, Dagan, and Hirsh 1998. Reprinted with permission of Springer Science and Business Media.)

Clustering operations can involve both automatic and manual processes. Unlike classic taxonomies, groupings of clusters do not need to be strictly hierarchical in structure; individual text mining systems may adopt more or less flexible approaches to such groupings. For this reason, it is generally a requirement that a text mining system offer robust and easy interfaces for a user to view, scrub, and maintain cluster information. Moreover, because both document collections and users' needs can change over time, it is especially important for text mining clustering capabilities to allow flexible reorientation of clusters as a system evolves and matures.

Some text mining systems perform the majority of their manual or unsupervised clustering during preprocessing operations. In these cases, it is still often important to provide users with the administrative capability to tweak clusters over the lifetime of a text mining application's use.

IX.2 ACCESSING CONSTRAINTS AND SIMPLE SPECIFICATION FILTERS AT THE PRESENTATION LAYER
Given the immense number of prospective patterns that they might identify, text mining systems generally provide support for some level of user-specifiable constraints. These constraints can be employed to restrict the search to returning particular patterns, to limit the number of patterns presented, to offer options for specifying the interestingness of results, or to accomplish all of these objectives.

From a system architecture perspective, the logic of such constraints should be seen more as refinement techniques than as presentation-layer elements. From a user perspective, however, such constraints and filters are invoked and modulated through the user interface. Therefore, constraint types can be discussed in relation to the other elements that can be employed to shape queries through a presentation-layer interface. Four common types of constraints are typical of text mining browser interfaces:

■ Background Constraints refer to knowledge of the domain that is given in the form of binary relations between concepts. For example, rules associating persons and countries can be constrained by the condition that an association between a person and a country excludes the nationality of that person. Background constraints typically require a set of predicates to be created relating to certain types of concepts (e.g., entities) in the text mining system's document collection. Binary predicates can allow one input argument and one output argument. Such predicates are usually extracted from some expert or "gold standard" knowledge source.
■ Syntactical Constraints generally relate to selections of concepts or keywords that will be included in a query. More specifically, they can refer to the components of the patterns – for example, to the left- or right-hand side of a rule or to the number of items in the components.
■ Quality Constraints most often refer to support and confidence thresholds that can be adjusted by a user before performing a search. However, quality constraints can also include more advanced, customized statistical measures that provide qualities for patterns. An association rule, for instance, can be additionally specified by the significance of a statistical test, or a distribution of a concept group can be evaluated with respect to a reference distribution. These qualities are then used in constraints when searching for significant patterns.
■ Redundancy Constraints have been described as metarules that determine when a pattern is suppressed by another pattern. For example, a redundancy rule could be used to suppress all association rules with a more specialized left-hand side than that of another association rule and a confidence score that is not higher than that of the other, more general rule. (A sketch of such a filter appears below.)

Constraints are important elements in allowing a user to efficiently browse patterns that are potentially either incrementally or dramatically more relevant to his or her search requirements and exploration inclinations. Moreover, they can be essential to ensuring the basic usability of text mining systems accessing medium or large document collections.
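The redundancy metarule in the example above can be sketched as a simple filter (an illustration only; rules are assumed to be given as (lhs, rhs, confidence) triples, with lhs represented as a set of concepts):

def suppress_redundant(rules):
    # Drop each rule whose LHS strictly extends the LHS of a more
    # general rule with the same RHS, without gaining confidence.
    kept = []
    for lhs, rhs, conf in rules:
        redundant = any(
            g_lhs < lhs and g_rhs == rhs and conf <= g_conf
            for g_lhs, g_rhs, g_conf in rules
        )
        if not redundant:
            kept.append((lhs, rhs, conf))
    return kept

Applied before display, a filter of this kind can significantly reduce pattern overabundance without removing any information a user could not recover from the surviving, more general rules.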
IX.3 ACCESSING THE UNDERLYING QUERY LANGUAGE
Although graphical interfaces make text mining search and browsing operations easier for users to conduct, some search and browsing activities are facilitated if users have direct access to the text mining system's underlying query language with its well-defined semantics. Many advanced text mining systems, therefore – in addition to offering pick lists of prespecified query types and common constraint parameters – support direct user access to a query command interpreter for explicit query composition.

Clearly, it is the query language itself that allows a user to search the vast implicit set of patterns available in a given document collection. However, the user environment for displaying, selecting, running, editing, and saving queries should not be given short shrift in the design of a text mining system. Figure IX.8 shows one example of a graphical query construction tool.

Figure IX.8. Defining a distribution query through a simple GUI in the KDT system. (From Feldman, Kloesgen, Ben-Yehuda, et al. 1997.)

Regardless of the specific combination of graphical and character-based elements employed, the easier it is for a user to specify his or her query – and understand exactly what that query is meant to return – the more usable and powerful a text mining system becomes. A more comprehensive discussion of text mining query languages can be found in Section II.3.

IX.4 CITATIONS AND NOTES
Section IX.1
Many of the ideas in Section IX.1 represent an expansion and updating of ideas introduced in Feldman, Kloesgen, Ben-Yehuda, et al. (1997). Methods for the display of associations are treated partially in Feldman and Hirsh (1997). Navigation by concept hierarchies is treated in Feldman, Kloesgen, Ben-Yehuda, et al. (1997) and Feldman, Fresko, Hirsh, et al. (1998). Taxonomy editing tools are briefly discussed in Feldman, Fresko, Hirsh, et al. (1998).

Section IX.2
Presentation-level constraints useful in browsing are considered in Feldman, Kloesgen, and Zilberstein (1997a, 1997b).

Section IX.3
Feldman, Kloesgen, and Zilberstein (1997b) discusses the value of providing users of text mining systems with multiple types of functionality to specify a query.

X Visualization Approaches

X.1 INTRODUCTION
Human-centric text mining emphasizes the centrality of user interactivity to the knowledge discovery process. As a consequence, text mining systems need to provide users with a range of tools for interacting with data. For a wide array of tasks, these tools often rely on very simple graphical approaches, such as pick lists, drop-down boxes, and radio boxes, that have become typical in many generic software applications to support query construction and the basic browsing of potentially interesting patterns.

In large document collections, however, problems of pattern and feature overabundance have led the designers of text mining systems to move toward the creation of more sophisticated visualization approaches to facilitate user interactivity. Indeed, in document collections of even relatively modest size, tens of thousands of identified concepts and thousands of interesting associations can make browsing with simple visual mechanisms such as pick lists all but unworkable. More sophisticated visualization approaches incorporate graphical tools that rely on advances in many different areas of computer and behavioral science research to promote easier, more intensive, and more iterative exploration of patterns in textual data.

Many of the more mundane activities that allow a user of a text mining system to engage in rudimentary data exploration are supported by a graphic user interface that serves as the type of basic viewer or browser discussed in Chapter IX. A typical basic browsing interface can be seen in Figure X.1.

Figure X.1. Basic category browsing in a text mining system. (From Feldman, Kloesgen, Ben-Yehuda, et al. 1997.)

This type of basic browsing often combines a limited number of query-building functions with an already refined or constrained view of a subset of the textual data in the document collection. In addition, a basic browsing interface sometimes supplements its more character-oriented display elements by supporting the simplified execution of subroutines that draw static graphs of query results.

Text mining visualization approaches, on the other hand, generally emphasize a set of purposes different from those that underpin basic browsing interfaces. Although both basic browsers and visualization tools aim at making interaction with data possible, visualization tools typically result in more sophisticated graphical interfaces that attempt to stimulate and exploit the visual capacities of users to identify patterns.

For instance, an interactive circle graph – a common visualization tool in text mining systems – might be tailored specifically to allow cancer researchers to explore an entire corpus of medical research literature broadly in a single graph (see the background graph in Figure X.2).
By having concepts extracted from the literature represented as nodes on the periphery of the circle and associations between concepts identified by linking lines of various thicknesses that bisect the circle, a researcher could very quickly navigate high-level concepts and then zero in on relationships emanating from more granular concepts – all while gaining at least a generalized sense of the totality of relationships within the corpus.

Figure X.2. Circle graph–based category connection map of medical literature relating to AIDS, with inset of a graphically driven refinement filter. (From Feldman, Regev, et al. 2003.)

This type of visualization tool enables a researcher to appraise, handle, and navigate large amounts of data quickly and with relative ease. Moreover, "control elements" – such as refinement filters or other constraint controls – can be embedded into the overall operations of the visualization interface, executed by as little as a mouse click on a highlighted concept label (see the inset in Figure X.2). Certainly, some kinds of refinement constraints lend themselves to being set quite adequately by character-driven menus or pull-down boxes. By facilitating context-sensitive and graphical refinement of query results, however, more sophisticated visual presentation tools can add to the speed and intuitive ease with which a user can shape knowledge-discovery activities.

Critical advantages that individual visualization approaches can have over character-oriented browsing formats in presenting patterns in data include the following:

■ Concision: the capability of showing large amounts of different types of data all at once;
■ Relativity and Proximity: the ability to easily show clusters, relative sizes of groupings, similarity and dissimilarity of groupings, and outliers among the data in query results;
■ Focus with Context: the ability to interact with some highlighted feature while also being able to see the highlighted feature situated in some of its relational context;
■ Zoomability: the ability to move from micro to macro quickly and easily, in one big step or in increments;
■ "Right Brain" Stimulation: the ability to invite user interaction with textual data that is driven not only by premeditated and deliberate search intentions but also by intuitive, reactive, or spatially oriented cognitive processes for identifying interesting patterns.

On the other hand, adding an overabundance of complex graphical features to a visualization interface does not necessarily make the interface more appropriate to its search tasks. Overly complex visualization tools can overdetermine or even inhibit the exploration of textual data – particularly if the designers of text mining systems lose sight of the main advantages that graphic presentation elements have over more prosaic form- or table-based browser formats.

The evolution from simple, primarily character-based browsers to more powerful and more specialized visualization interfaces has helped transform the orientation of text mining systems.
Text mining systems have moved from a focus on the premeditated search for suspected patterns to a broader capability that also includes more free-form and unguided exploration of textual data for implicit, obscure, and unsuspected patterns.

X.1.1 Citations and Notes

A seminal discussion of human-centered knowledge discovery can be found in Brachman and Anand (1996). Grinstein (1996) also offers a relevant treatment of related topics.

Figure X.3. High-level functional architecture of a text mining system showing position of visualization.

General overviews of information visualization can be found in Tufte (1983), Tufte (1990), Cleveland (1994), Shneiderman (1997), and Spence (2001). Useful works on information visualization techniques in information retrieval and the visual presentation of query results include Rao et al. (1992), Spoerri (1999), Ahlberg and Shneiderman (1994), Masui et al. (1995), Hearst (1999), Lagus (2000b), and Hearst (2003). Important early treatments of information navigation and exploration approaches include Goldstein and Roth (1994), Ahlberg and Wistrand (1995), and Jerding and Stasko (1995).

There really is not yet a comprehensive treatment of visualization techniques specific to text mining. However, several works – including Feldman, Kloesgen, Ben-Yehuda, et al. (1997); Feldman, Kloesgen, and Zilberstein (1997a); Landau et al. (1998); Aumann et al. (1999); Lagus et al. (1999); Wong, Whitney, and Thomas (1999); Lagus (2000a); and Wong et al. (2000) – provide relevant discussions of a limited number of specific visual techniques and their application to text mining activities.

X.2 ARCHITECTURAL CONSIDERATIONS

In the high-level functional architecture of a text mining system illustrated in Figure X.3, visualization tools are among those system elements situated closest to the user. Visualization tools are mechanisms that serve to facilitate human interactivity with a text mining system. These tools are layered on top of – and are dependent upon – the existence of a processed document collection and the various algorithms that make up a text mining system's core mining capabilities.

The increased emphasis on adding more sophisticated and varied visualization tools to text mining systems has had several implications for these systems' architectural design. Although older text mining systems often had rigidly integrated visualization tools built into their user interface (UI) front ends, newer text mining systems emphasize modularity and abstraction between their front-end (i.e., presentation layer) and middle-tier (i.e., core discovery and query execution elements) architectures.
Figure X.4. Situating visualization within text mining system architecture.

Indeed, there are several good reasons for architects of text mining systems to abstract the front and middle tiers of their software platforms. First, visualization tools and knowledge discovery algorithms tend to be modified and upgraded on an ever more iterative basis. A "decoupled" or "loosely coupled" front end and middle tier in a text mining system – abstracted from each other by an intermediary connection layer based on a formal and well-defined software interface – allow much better for such unsynchronized development of different elements of the text mining system. Figure X.4 illustrates the general position of visualization components in a text mining system's architecture.

Second, text mining systems are moving from having a few limited visualization and graphing tools to supporting whole suites of different kinds of presentation-layer utilities. This is both a reflection of the movement toward facilitating greater user interactivity through more customized (even personalized) UIs and the proliferation of more mature, sophisticated, and specialized visualization tools. With many more types of different visualization approaches now available, architects of text mining systems are probably well advised to keep their options open; instead of scrapping a whole text mining system when its UI has become hopelessly outdated, developers can leverage a more loosely coupled front-end and middle-tier architecture to continue to add additional visualization components.

Finally, from a practical perspective, the wider availability of RDF- and XML-oriented protocols makes such loose coupling of front ends and middle tiers much more feasible. This fact is underscored by the current availability of a whole spate of specialized and very powerful commercial off-the-shelf visualization software with defined interfaces or feed formats that support various RDF or XML data interchange approaches.

Visualization tools have increasingly played a crucial, even transformative role in current state-of-the-art text mining systems. As with data mining systems, sophisticated visualization tools have become more critical components of text mining applications because of their utility in facilitating the exploration for hidden and subtle patterns in data.

X.2.1 Citations and Notes

For obvious reasons, the architectural discussion in this section is highly generalized. The architectural descriptions have been informed by the visualization elements found in the KDT (Feldman and Dagan 1995), Explora (Kloesgen 1995b), Document Explorer (Feldman, Kloesgen, and Zilberstein 1997a), and TextVis (Landau et al. 1998) knowledge discovery systems.
X.3 COMMON VISUALIZATION APPROACHES FOR TEXT MINING

X.3.1 Overview

A substantial and mature literature already exists relating to the use of visualization tools in a wide range of generic and specific computer science applications. The aim of the next few sections is to illustrate how a select number of commonly seen visualization approaches have been put to good use supplementing text mining functionality.

The potential number of combinations of visual techniques that can be applied to problems in unstructured data is probably limitless. With such a wide array of possible visual techniques, coupled with the subjective nature of assessing the efficacy of visualization approaches across different types of knowledge-discovery problem sets and user groups, it would be problematic to attempt to rate, to any precise degree, the success of a specific visual approach or set of approaches.

Nevertheless, several visual approaches have suggested themselves more informally as useful enhancements to knowledge discovery operations involving textual data. These include simple concept graphs, histograms, line graphs, circle graphs, self-organizing maps, and so-called context + focus approaches – like the hyperbolic tree – as well as various derivative and hybrid forms of these main approaches. Thus, perhaps it should be stated clearly that the intention here is not so much to be prescriptive – detailing the circumstances when a particular visualization approach is decidedly more appropriate, more powerful, or more effective than another for a given task – as descriptive, or describing how a particular tool has typically been employed to supplement text mining systems.

X.3.2 Simple Concept Graphs

Even rather bare-bones visualization tools such as simple concept graphs provide an efficient exploration tool for getting familiar with a document collection. The two main benefits of these types of visualizations are their abilities to organize the exploration of textual data and to facilitate interactivity – that is, the user can click on each node or edge and get the documents supporting them or can initiate various other operations on the graphs. There is a relationship between these two benefits as well: offering user-friendly organization approaches can do much to promote increased user interactivity with textual data.

This latter type of exploration can be further supported by linking several graphs. Thus, the relevance of selected aspects of one graph can be efficiently studied in the context of another graph. Simple concept graphs have been used, with many variations, in several real-world text mining systems.

Simple Concept Set Graphs

One of the most basic and universally useful visualization tools in text mining is the simple "root and branches" hierarchical tree structure. Figure X.5 shows a classic visualization for a concept taxonomy in a document collection. The root and leaf vertices (nodes) of such a visualization are concept identifiers (i.e., name labels for concepts). The special layout of the presentation elements allows a user to traverse the hierarchical relationships in the taxonomy easily either to identify sought-after concepts or to search more loosely for unexpected concepts that appear linked to other interesting concepts in the hierarchy.

Figure X.5. Interactive graph used to illustrate a concept taxonomy as a hierarchical tree structure. (From Feldman, Dagan, and Hirsh 1998. Reprinted with permission of Springer Science and Business Media.)
This kind of visualization tool can also easily be made to allow a user to click on a node concept and either move to the underlying documents containing the concept or to connect to information about sets or distributions of documents containing the concept within the document collection. This latter type of information – the answer set to a rather routine type of query in many text mining systems – can be demonstrated by means of a concept set graph.

Formally, a concept set graph refers to a visual display of a subset of concept sets with respect to their partial ordering. Perhaps the most common and straightforward way to display concept sets graphically is also by means of a simple hierarchical tree structure. Figure X.6 shows a set graph for frequent sets arranged in a tree structure. The user can operate on this graph by selecting nodes, opening and closing nodes, or defining new search tasks with respect to these nodes, for instance, to expand the tree. The first level in Figure X.6 relates to country concepts sorted by a simple quality measure (support of the frequent set). The node "USA" (support: 12,814 documents) is expanded by person concepts. Further expansions relate to economical topic concepts (e.g., expansion of the node "James Baker": 124 documents, 0%) and country concepts.

Figure X.6. A hierarchical concept set graph. (From Feldman, Kloesgen, and Zilberstein 1997b.)

Of course, a hybrid form could be made between the "root and branches"–type visual display format shown in Figure X.5 and the simple concept set graph illustrated in Figure X.6. For some applications, having the percentage support displayed within a concept node on the root and branches visualization might prove more useful to navigation and exploration than the "long indented list" appearance of the vertical tree structure in Figure X.6. This form would be a directed graph, the edges of which (usually depicted with directional arrowheads) indicate the hierarchical relationship between nodes at the graph vertices.

Although a hierarchical concept graph may represent a very basic approach to visualizing sets of concepts, it can also be used as the entry point for jumping to more complex visualizations or graphically driven refinement controls. For instance, by clicking on a concept identifier, a user may be able to navigate to another graph that shows associations containing the highlighted concept, or a graphic box could be triggered by clicking on a concept allowing the user to adjust the quality measure that drove the original query.

Another related, commonly used visualization approach applicable to simple concept sets is the organization of set members into a DAG (directed acyclic graph). Formally, a DAG can be described as a directed graph that has no path that begins and ends at the same vertex. Practically, it might be viewed as a hierarchical form in which child nodes can have more than one parent node.

DAGs can be useful in describing more complex containership relations than those represented by a strict hierarchical form, in that the DAG represents a generalization of a tree structure in which a given subtree can be shared by different parts of the tree.
For instance, a DAG is often used to illustrate the somewhat more complex relations between concepts in an ontology that models a real-world relationship set in which "higher level" concepts often have multiple, common directed relationships with "lower level" concepts in the graph. Described in a different way, DAGs permit lower level containers to be "contained" within more than one higher level container at the same time.

A very simple DAG is shown in Figure X.7. Traditional, rigidly directed hierarchical representations might be both much less obvious and less efficient in showing that four separate concepts have a similar or possibly analogous relationship to a fifth concept (e.g., the relationship that the concepts motor vehicles, cars, trucks, and power tillers have with the concept engines). Because of their ability to illustrate more complex relationships, DAGs are very frequently leveraged as the basis for moderately sophisticated relationship maps in text mining applications.

Figure X.7. A simple DAG modeling a taxonomy that includes multiple parent concepts for a single-child concept.

A more complex and well-known application of a DAG to an ontology can be seen in Zhou and Cui's (2004) visual representations of the Gene Ontology (GO) database. Zhou and Cui created a DAG to visually model a small subset of the GO ontology, focusing only on the root node and three child nodes. When using a DAG to show biological function for 23 query genes, the DAG still ended up having 10 levels and more than 101 nodes.

Visualization techniques based on DAGs have proven very useful for visually modeling complex set-oriented or container-type relationships, such as those found among concepts in the GO ontology, in a relatively straightforward and understandable way. However, Zhou and Cui find DAGs to be limited when illustrating more granular or functional relationship information (such as one finds when exploring concept associations) or when the number of concepts in play becomes large. In these cases, other types of visualization techniques, such as circular graphs or network models, can sometimes provide greater expressive capabilities.

Beyond their use in modeling concept hierarchies, DAGs can also be employed as the basis for modeling activity networks. An activity network is a visual structure in which each vertex represents a task to be completed or a choice to be made and the directed edges refer to subsequent tasks or choices. See Figure X.8. Such networks provide the foundation for more advanced types of text mining knowledge discovery operations. DAG-based activity networks, for instance, form the basis for some of the more popular types of visualizations used in critical path analysis – often an important approach in knowledge-discovery operations aimed at link detection.

Figure X.8. Visualization of a generic DAG-based activity network.

Simple Concept Association Graphs

Simple concept association graphs focus on representing associations. A simple association graph consists of singleton vertex and multivertex graphs in which the edges can connect sets of several concepts. Typically, a simple association graph connects concepts of a selected category. At each vertex of a simple association graph, there is only one concept. Two concepts are connected by an edge if their similarity with respect to a similarity function is larger than a given threshold.
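To make the edge-building rule concrete, the following minimal Python sketch builds such a graph from raw co-occurrence counts. The toy document collection, the concept names, and the bare co-occurrence similarity are all invented for illustration; any of the similarity functions discussed below could be substituted.

```python
from itertools import combinations

# Hypothetical mini-collection: each document reduced to its set of
# extracted concepts (names invented purely for illustration).
docs = [
    {"Microsoft", "Google", "IBM"},
    {"Microsoft", "MSN"},
    {"Google", "Yahoo", "IBM"},
    {"Lycos", "Findwhat"},
    {"Microsoft", "Google"},
]

def support(concepts, collection):
    """Number of documents that contain every concept in `concepts`."""
    return sum(1 for d in collection if concepts <= d)

def association_graph(collection, similarity, threshold):
    """Connect two concepts by an (undirected) edge iff their
    similarity exceeds the given threshold."""
    vocabulary = sorted(set().union(*collection))
    edges = {}
    for a, b in combinations(vocabulary, 2):
        sim = similarity(a, b, collection)
        if sim > threshold:
            edges[(a, b)] = sim  # edge weight = similarity score
    return edges

# A deliberately naive similarity: raw co-occurrence document count.
cooccurrence = lambda a, b, coll: support({a, b}, coll)

print(association_graph(docs, cooccurrence, threshold=1))
# -> {('Google', 'IBM'): 2, ('Google', 'Microsoft'): 2}
```

Raising the threshold can only remove edges, never add them – the monotony property that the similarity-function discussion below exploits for tuning graph complexity.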
A simple concept association graph can be undirected or directed, although undirected graphs are probably more typical. For example, one might use an undirected graph to model associations visually between generic concepts in a document collection generated from corporate finance documentation. On the other hand, if one were seeking to produce a tool to visualize associations between proteins in a corpus of proteomics research literature, one might want to employ a directed graph with directed edges (as denoted by directional arrowheads) between concept nodes. This type of directed graph would be useful in visually indicating not just general association relationships but also the patterns of one protein's acting upon another.

Figure X.9. Concept association graph: single vertex, single category (software companies in the context of search engine software).

Figure X.9 shows a concept association graph for the company category in the context of search engine software and a simple similarity function based on the number of documents in which the companies co-occur. The figure allows a user to quickly infer conclusions about data that might be possible only after a much more careful investigation if that user were forced to make his or her way through large lists or tables of textual and statistical data. Such inferences might include the following:

■ Microsoft, Google, and IBM are the most connected companies;
■ Lycos and Findwhat are the only members of a separate component of the graph;
■ MSN is connected only to Microsoft, and so on.

Another type of simple concept association graph can present the associations between different categories such as companies and software topics. The singleton vertex version of this graph is arranged like a map on which different positions of circles are used to include the concepts of categories, and edges (between companies and software topics) present the associations. Often, these singleton vertex graphs are designed as bipartite graphs displaying two categories of concepts by splitting one category to the top of the graph and another category to the bottom with edges representing connections linking individual pairs of vertices. Figure X.10 shows an example of this kind of concept association graph.

Figure X.10. Concept association graph: single vertex, several categories.

Similarity Functions for Simple Concept Association Graphs

Similarity functions often form an essential part of working with simple concept association graphs, allowing a user to view relations between concepts according to differing weighting measures. Association rules involving sets (or concepts) A and B that have been described in detail in Chapter II are often introduced into a graph format in an undirected way and specified by a support and a confidence threshold. A fixed confidence threshold is often not very reasonable because it is independent of the support of the RHS of the rule. As a result, an association should have a significantly higher confidence than the share of the RHS in the whole context to be considered as interesting. Significance is measured by a statistical test (e.g., t-test or chi-square).
With this addition, the relation given by an association rule is undirected. An association between two sets A and B in the direction A ⇒ B implies also the association B ⇒ A. This equivalence can be explained by the fact that the construct of a statistically significant association is different from implication (which might be suggested by the notation A ⇒ B). It can easily be derived that if B is overproportionally represented in A, then A is also overproportionally represented in B.

As an example of differences of similarity functions, one can compare the undirected connection graphs given by statistically significant association rules with the graphs based on the cosine function. The latter relies on the cosine of two vectors and is efficiently applied for continuous, ordinal, and also binary attributes. In the case of documents and concept sets, a binary vector is associated to a concept set with the vector elements corresponding to documents. An element holds the value 1 if all the concepts of the set appear in the document. Table X.1 (Feldman, Kloesgen, and Zilberstein 1997b), which offers a quick summary of some common similarity functions, shows that the cosine similarity function in this binary case reduces to the fraction built by the support of the union of the two concept sets and the geometrical mean of the support of the two sets.

A connection between two sets of concepts is related to a threshold for the cosine similarity (e.g., 10%). This means that the two concept sets are connected if the support of the document subset that holds all the concepts of both sets is larger than 10 percent of the geometrical mean of the support values of the two concept sets.

Table X.1. Some Commonly Used Similarity Functions for Two Concept Sets A, B (a = support(A), b = support(B), d = support(A,B))

Function            | Similarity                       | Characteristic
Support threshold   | d > d0 (step function)           | evaluates only d, independent from a − d, b − d
Cosine              | s = d/√(a · b)                   | low weight of a − d, b − d
Arithmetical mean   | s = 2d/(a + b)                   | middle point between cosine and Tanimoto
Tanimoto            | s = d/(a + b − d)                | high weight of a − d, b − d
Information measure | weighted documents               | only applicable if weights are reasonable
Statistical test    | threshold on statistical quality | typically for larger samples and covers

The threshold holds a property of monotony: If it is increased, some connections existing for a lower threshold disappear, but no new connections are established. This property is used as one technique to tune the complexity of a simple concept graph.

One can derive a significance measure (factor f) for this situation in which tuning is required in the following way. Let f be the following factor: f = N · s(A,B) / (s(A) · s(B)). Given the supports s(A) and s(B) of the two concept sets and N the number of documents in the collection (or a subcollection given by a selected context), we can calculate the factor. In the case of the independence of the two concept sets, f would be expected around the value 1. Thus, f is larger than 1 for a statistically significant association rule. The cosine similarity of concept sets A and B can now be calculated as S(A,B) = f · √(q(A) · q(B)); that is, as the geometrical mean of the relative supports of A and B (q(A) = s(A)/N) multiplied by the factor f, thus combining a measure for the relative support of the two sets (geometrical mean) with a significance measure (factor f).
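The identity just stated is easy to verify numerically. The following sketch implements the closed-form similarity functions of Table X.1; the support values and collection size in the example are invented for illustration:

```python
from math import sqrt

# Similarity functions from Table X.1, with a = support(A),
# b = support(B), d = support(A,B), and n = collection size N.
def cosine(a, b, d):
    return d / sqrt(a * b)

def arithmetic_mean(a, b, d):
    return 2 * d / (a + b)

def tanimoto(a, b, d):
    return d / (a + b - d)

def significance_factor(a, b, d, n):
    """f = N*s(A,B) / (s(A)*s(B)); roughly 1 when A and B are
    independent, noticeably larger than 1 for significant associations."""
    return n * d / (a * b)

# Worked example with invented supports in a 1,000-document collection:
a, b, d, n = 100, 50, 30, 1000
f = significance_factor(a, b, d, n)

# The text's identity S(A,B) = f * sqrt(q(A) * q(B)), with q(X) = s(X)/N:
assert abs(cosine(a, b, d) - f * sqrt((a / n) * (b / n))) < 1e-12
print(cosine(a, b, d), tanimoto(a, b, d), f)   # 0.424..., 0.25, 6.0
```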
The cosine similarity therefore favors connections between concept sets with a large support (which need not necessarily hold a significant overlapping) and includes connections for concept sets with a small support only if the rule significance is high. This means that the user should select the cosine similarity option if there is a preference for connections between concept sets with a larger support. On the other hand, the statistically based association rule connection should be preferred if the degree of coincidence of the concepts has a higher weight for the analysis. Similar criteria for selecting an appropriate similarity function from Table X.1 can be derived for the other options.

Equivalence Classes, Partial Orderings, Redundancy Filters

Very many pairs of subsets can be built from a given category of concepts (e.g., all pairs of country subsets for the set of all countries). Each of these pairs is a possible association between subsets of concepts. Even if the threshold of the similarity function is increased, the resulting graph can have too complex a structure. We now define several equivalence relations to build equivalence classes of associations. Only a representative association from each class will then be included in the keyword graph in the default case.

A first equivalence is called cover equivalence. Two associations are cover-equivalent iff they have the same cover. For example, (Iran, Iraq) => (Kuwait, USA) is equivalent to (Iran, Iraq, Kuwait) => USA because they both have the same cover (Iran, Iraq, Kuwait, USA). The association with the highest similarity is selected as the representative from a cover equivalence class.

Context equivalence is a second equivalence relation. Two associations are context-equivalent iff they are identical up to a different context. That means that two associations are identical when those concepts that appear on both sides are eliminated from each association. For example, (Iran, Iraq) => (Iran, USA) is equivalent to (Kuwait, Iraq) => (Kuwait, USA). The first association establishes a connection between Iraq and USA in the context of Iran, whereas the second association is related to the context of Kuwait. The context-free associations are selected as the representatives from this equivalence class (e.g., Iraq => USA).

The next definition relates to a partial ordering of associations, not an equivalence relation. An association A1 is stronger than an association A2 if the cover of A1 is a subset of the cover of A2. As special cases of this ordering, the right- and left-hand sides are treated separately.

Selecting the representative of an equivalence class or the strongest associations can be applied as a basic redundancy filter. Additional criteria can refine these filters (for instance, for context equivalence, a context-conditioned association can be selected in addition to the context-free association iff the similarity of the context-conditioned association is much higher with respect to a significance criterion).

There is a duality between frequent sets of concepts and associations. For a given set of frequent concepts, the implied set of all associations between frequent concepts of the set can be introduced. On the other hand, for a given set of associations, the set of all frequent concepts appearing as left- or right-hand sides in the associations can be implied.
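A redundancy filter based on cover equivalence is only a few lines of code. The sketch below groups rules by their cover and keeps the highest-scoring representative of each class; the rule sets and similarity scores are invented for illustration.

```python
def cover(rule):
    """The cover of an association rule (lhs, rhs) is the union of both sides."""
    lhs, rhs = rule
    return frozenset(lhs) | frozenset(rhs)

def cover_representatives(rules, score):
    """Group rules into cover-equivalence classes and keep, from each
    class, the rule with the highest similarity score."""
    best = {}
    for rule in rules:
        key = cover(rule)
        if key not in best or score(rule) > score(best[key]):
            best[key] = rule
    return list(best.values())

# (Iran, Iraq) => (Kuwait, USA) and (Iran, Iraq, Kuwait) => (USA,) share
# the cover {Iran, Iraq, Kuwait, USA}, so only one representative survives.
rules = [
    (("Iran", "Iraq"), ("Kuwait", "USA")),
    (("Iran", "Iraq", "Kuwait"), ("USA",)),
]
scores = {rules[0]: 0.8, rules[1]: 0.6}        # invented similarity values
print(cover_representatives(rules, scores.get))
# -> [(('Iran', 'Iraq'), ('Kuwait', 'USA'))]
```

A context-equivalence filter would follow the same pattern, with the class key computed by removing the concepts shared by both sides before comparing rules.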
In the application area of document collections, users are mainly interested in frequent concept sets when concentrating on basic retrieval or browsing. These frequent concepts are considered as retrieval queries that are discovered by the system to be interesting. When attempting to gain some knowledge of the domain represented by a document collection, users are often drawn to interacting with association rules, shaping the various measures and refinement filters to explore the nature of the concept relations in the domain. In the simple concept graphs, the concept sets are therefore included as active nodes (activating a query to the collection when selected by the user). Complementary and intersection sets (e.g., related to the cover of an association) can also appear as active nodes.

Typical Interactive Operations Using Simple Concept Graphs

One of the key drivers for employing visualization tools is to promote end user interactivity. Concept graphs derive much of their value from facilitating interactive operations. A user can initiate these operations by manipulating elements in the graphs that execute certain types of system activities.

Some interactive operations relating to concept graphs have already been discussed – or at least suggested – in the previous sections. However, a more systematic review of several types of these operations provides useful insights into the kinds of secondary functionality that can be supplemented by simple concept graph visualization approaches.

Browsing-Support Operations

Browsing-support operations enable access to the underlying document collections from the concept set visual interface. Essentially, a concept set corresponds to a query that can be forwarded to the collection, retrieving those documents (or their titles as first summary information) that include all the concepts of the set. Therefore, each concept set appearing in a graph can be activated for browsing purposes. Moreover, derived sets based on set operations (e.g., difference and intersection) can be activated for retrieval.

Search Operations

Search operations define new search tasks related to nodes or associations selected in the graph. A graph presents the results of a (former) search task and thus puts together sets of concepts or sets of associations. In a GUI, the user can specify the search constraints: syntactical, background, quality, and redundancy constraints. The former search is now to be refined by a selection of reference sets or associations in the result graph. Some of the search constraints may be modified in the GUI for the scheduled refined search. In refinement operations, the user can, for example, increase the number of elements that are allowed in a concept set. For instance, selected concept sets in Figure X.6 or selected associations in Figure X.9 can be expanded by modifying restrictions on the maximum number of elements in concept sets.

Link Operations

Link operations combine several concept graphs. Elements in one graph are selected and corresponding elements are highlighted in the second graph. Three types of linked graphs can be distinguished: links between set graphs, between association graphs, and between set and association graphs.

When linking two set graphs, one or several sets are selected in one graph and corresponding sets are highlighted in the second graph. A correspondence for sets can rely, for instance, on the intersections of a selected set with the sets in the other graph.
Then all those sets that have a high overlap with a selected set in the first graph are highlighted in the second graph. When selected elements in a set graph are linked with an association graph, associations in the second graph that have a high overlap with a selected set are highlighted. For instance, in a company graph, all country nodes that have a high intersection with a selected topic in an economical topic graph can be highlighted.

Thus, linkage of graphs relies on the construct of a correspondence between two set or association patterns. For example, a correspondence between two sets can be defined by a criterion referring to their intersection, a correspondence between a set and an association by a specialization condition for the more special association constructed by adding the set to the original association, and a correspondence between two associations by a specialization condition for an association constructed by combining the two associations.

Presentation Operations

Another interaction class relates to diverse presentation options for the graphs. It includes a number of operations essential to the customization, personalization, calibration, and administration of presentation-layer elements, including

■ sorting (e.g., by different aspects of quality measures),
■ expanding or collapsing,
■ filtering or finding, and
■ zooming or unzooming nodes, edges, or graph regions.

Although all these presentation-layer operations can have important effects on facilitating usability – and, as a result, increased user interaction – some can in certain situations have a very substantial impact on the overall power of a system's visualization tools. Zoom operations, in particular, can add significant capabilities to otherwise very simple concept graphs. For instance, by allowing a user to zoom automatically to a predetermined focal point in a graph (e.g., some concept set or association that falls within a particular refinement constraint), one can add at least something reminiscent of the type of functionality found in much more sophisticated fisheye and self-organizing map (SOM) visualizations.

Drawbacks of Simple Concept Graphs

A few disadvantages of simple concept graphs should be mentioned. First, the functionality and usability of simple concept graphs often become more awkward and limited with high levels of dimensionality in the data driving the models. Hierarchies with vast numbers of nodes and overabundant multiple-parent relationships can be difficult to render graphically in a form that is legible – let alone actionable – by the user of a text mining system; moreover, undirected concept association graphs can become just as intractable for use with large node counts.

In addition, although the streamlined nature of simple concept graphs has its benefits, there are some built-in limitations to interactivity in the very structure of the graph formats. Clearly, tree diagrams can have "hotspots" that allow linking to other graphs, and the type of linked circle node graphs that constitute most simple concept association graphs can be made to support pruning and other types of refinement operations. However, both forms of simple concept graphs are still relatively rigid formats that are more useful in smaller plots with limited numbers of nodes that can be visually traversed to understand patterns. As a result, simple concept graphs are far less flexible in supporting the exploration of complex relationships than some other types of visualization approaches.
Indeed, other approaches, such as some three-dimensional paradigms and visual methodologies involving greater emphasis on context + focus functionality, have been specifically designed to offer greater flexibility in the navigation and exploration of query results involving data with more abundant or complex patterns.

X.3.3 Histograms

In addition to basic "bread and butter" approaches like simple concept graphs, early text mining systems often relied on classic graphing formats such as histograms, or bar charts, to provide visualization capabilities. Although architects of text mining systems have shown an increasing willingness to integrate more exotic and complex interactive graphic tools into their systems, histograms still have their uses in exploring patterns in textual data. A very simple histogrammatic representation can be seen in Figure X.11.

Figure X.11. Early text mining visualization implementation based on a histogram (topic distribution graph from the KDT system ca. 1998). (From Feldman, Kloesgen, and Zilberstein 1997b.)

Histograms seem particularly well suited to the display of query results relating to distributions and proportions. However, although two-dimensional (2-D) bar charts themselves have changed little over the last several years, the overall presentation framework in which these bar charts are displayed has become substantially more refined. Histogrammatic representations are often situated in GUIs with split screens, which also simultaneously display corresponding lists or tables of concept distribution and proportion information of the type described in Chapter II. Histograms are useful in the presentation of data related to distributions and proportions because they allow easy comparison of different individual concepts or sets across a wider range of other concepts or sets found within a document collection or subcollection. See Figure X.12.

Figure X.12. GUI's left pane shows results of concept distribution query in list form and right pane with histogrammatic representation of this same distribution.

This is not, however, to say that histograms are only useful in displaying results for distribution- or proportion-type queries; for instance, associations can
also be compared according to various measures and plotted in stacks. It is true, however, that histograms are more commonly employed to display distribution- and proportion-related results. Users can thus very easily sort or traverse these lists of concepts (typically along with details pertaining to some measure of quality) while quickly scanning the bar charts for visual cues that suggest relative comparisons of distribution or proportion information, outliers, concepts closer to the average distribution, and so on.

Indeed, histograms now tend to be more interactive inasmuch as users are able to manipulate refinement constraints. The incorporation of pop-ups or separate windows allows filters to be adjusted by means of sliders, dials, buttons, or pull-down boxes to give users a more "real-time" feel for how constraint changes affect query results. Radical fluctuations in the height of individual bars on a chart are visually much more immediately noticeable to a user than changes in numerical values in long lists or table grids. Still, smaller differences between bars are harder to discern. Thus, histograms can sometimes be more useful for examining outliers and baseline values among charted items. They are less useful for helping a user visually distinguish between more minute differences in the relative values of individually graphed items.

X.3.4 Line Graphs

Like histograms, line graphs may not at first seem to be the most advanced of visualization approaches for text mining applications. However, these graphs have many advantages. Many academic and commercial text mining systems have at one time or another employed line graphs to support knowledge discovery operations.

Line graphs represent what might be described as "cheap and cheerful" visualization solutions for text mining. They are "cheap" because they combine the virtues of relatively low system overhead and development expense in that there are many widely available free or low-cost line-graphing software libraries that can be leveraged to create specific, competent presentation elements. They are "cheerful" because many of these mature, prebuilt libraries have been specifically developed to be embedded into a wide range of software applications. As a result, integration and customization of the libraries are often relatively straightforward.

These advantages make line graphs a good choice for developing uncomplicated graphs during the prototyping and early-release stages of text mining systems. The ease of implementing such graphs is helpful because it permits very quick feedback to system developers and users about the performance of text mining algorithms. Beyond their use as prototyping tools, line graphs have been employed to provide visualizations for numerous tasks relating to a wide array of text mining operations. Two types of visualization approaches relying on line graphs are particularly common.

The first approach involves comparisons across a range of items.
By using one axis of the graph to show some measure and the other to itemize elements for comparison, line graphs have been applied to three common analysis techniques:

■ Comparisons of the results of different sets of queries,
■ Comparisons of a set of common queries run against different document subsets, and
■ Comparisons of the numbers of concepts that appear under different constraint or quality-measure conditions.

Figure X.13 illustrates the first of these three common techniques in which a line graph displays a comparison of the number of associations for three sets of queries. Note the use of the two axes and the plotting of distinct lines with different symbols identifying data points.

Figure X.13. Line graph showing number of associations for three sets of queries.

The second and arguably most prevalent current use of line graphs in text mining is that of graphs displaying trends or quantities over time. Line charts provide a very easily understood graphical treatment for periodicity-oriented analytics with the vertical axis showing quantity levels and the horizontal axis identifying time periods. See Figure X.14.

Figure X.14. Line graph showing number of documents containing the entity Osama bin Laden over time.

Line graphs can also be used in hybrids of these two approaches. Using multiline graphs, one can compare various types of results common to text mining tasks in the context of the time dimension. See the example in Figure X.15. Such applications of line graphs benefit from their concision, for a large amount of information can be displayed simultaneously with clarity. Line graphs, however, may not be the most appropriate visualization modality when a text mining analytical task calls for inviting more immediate interactivity from a user.

Figure X.15. Two examples of multiline graphs comparing trend lines of quantities over time. (From Borner et al. 2003. Reprinted with permission.)

X.3.5 Circle Graphs

A circle graph is a visualization approach that can be used to accommodate a large amount of information in a two-dimensional format. It has been referred to as an "at-a-glance" visualization approach because no navigation is required to provide a complete and extremely concise visualization for potentially large volumes of data.

A circle graph is especially useful in visualizing patterns of association rules, though it is also very adaptable to displaying category information (Aumann, Feldman, et al. 1999). The format has been popularized by the widely used commercial data mining visualization tool NetMap (Duffet and Vernik 1997). Figure X.16 shows a basic circle graph. Essentially, a circle graph is, as the name suggests, a circular graph around the circumference of which are mapped items. Relations between these items are represented by edges that connect the items across the interior area of the circle.

Figure X.16. Circle graph.
Style elements, such as the color and thickness of the connecting lines, can be used to correspond to particular types of information about the connection. In addition, color gradients in the connecting lines can be used to show the direction of a relationship.

Circle graphs excel at modeling association rules that appear in the answer sets to queries (see Figure X.17). It is common for individual concepts to appear as points around the rim of the circle in association-oriented circle graphs, and their association with another concept is demonstrated by a connecting edge.

Figure X.17. Association-oriented circle graph.

Several additional visual enhancements are common in association-oriented circle graphs to enable users to have a richer graphic model of underlying textual data. First, it is common for connecting lines to use color gradients (e.g., going from yellow to blue) to show the directionality of an association. Second, a single distinct color (e.g., bright red) might also be used for a connecting line to denote a bidirectional association. Third, the relative thickness of connecting edges may be used to suggest some corresponding information about values relating to the association. Finally, the size, color, and font type chosen for the depiction of concept names around the circumference of the circle graph can be used to communicate information visually about particular concepts in a query result set.

One method that has been used for enhancing the interactivity of an association-oriented circle graph is to make the graph's peripheral concept names and interior connecting lines "click-sensitive" jumping points to other information. A user could click or mouse over a concept name and obtain additional ontological information relating to the concept, or, by clicking on a linking edge, a user could see the highlighted association's position in a list of associations ranked according to some quality measure.

Although circle graphs are particularly well suited to modeling large volumes of association data, it is important to recognize that this visualization approach – like most others – can still have its effectiveness impaired by too many concepts or associations. Therefore, with circle graphs, it is often advisable to offer users easy access to controls over refinement constraints. This allows users to calibrate a circle graph's visual feature dimensionality quickly to a level most suitable to a given search task and a particular user's subjective ability to process the visual information in the graph.

Category-Connecting Maps

Circle graphs often serve as the basis for category-connecting maps, another visualization tool useful in text mining. Beyond the basic taxonomical view afforded by a more traditional information-retrieval-type category map, a category-connecting map builds on an association-oriented circle graph by including the additional dimension of category context for the concepts in the graph.

Category-connecting maps generally show associations between concepts in several categories – all within a particular context. For instance, Figure X.18 shows a category-connecting map with associations between individuals in the category People and entities in the category Organization – all within the context of Terrorism.
Figure X.18. Category-connecting map of associations in the context of person and organization.

In creating category-connecting maps, special attention is typically paid to the formatting of graphical and text elements on the periphery of the circle. In Figure X.18, concepts are plotted around the periphery of the circle in a way that concentrates concepts within each category together. Such concentration of concepts on the periphery of the circle can leverage preprocessed hierarchical ordering of concepts within broader "contexts" to speed rendering. However, context concepts and categories for category-connecting maps can also be generated on the fly by various algorithmic techniques ranging from the leveraging of association rules to more specialized approaches like those discussed in Chapter II relating to the generation of context graphs.

Concepts within a given category will typically have their concept labeling all formatted in the same color or font type to reinforce the visual layout technique of showing concepts within categories, and this color or font type will contrast with those used for other categories displayed in the graph. Finally, higher level category labels are often displayed to the center and outside of the "cluster" of their concepts (e.g., the category names Person and Organization are underlined and offset from the circle in Figure X.18).

Multiple Circle Graph and Combination Graph Approaches

Often, text mining applications that employ circle graphs have graphical interfaces supporting the generation and display of more than one complete circle graph at a time. One reason for this is that, because of a circle graph's strength in showing a large amount of data about a given query set at a glance, multiple circle graphs displayed together can have tremendous value in helping establish explicit or implicit comparisons between different query results. This advantage might be leveraged in a text mining setting through the plotting of two or more circle graphs on screen at the same time, each having different refinement constraint values. Another example of this approach is category-connecting maps run against the same document collection and same main category groupings but with different contexts. Each of these examples would allow a user to make side-by-side assessments of differences and similarities in the graphing patterns of multiple graphs. Figure X.19 illustrates the use of multiple circle graphs in a single visualization.
Figure X.19. Side-by-side circle graphs with subgraphs. (From Feldman, Regev, et al. 2003.)

Another technique that relies on showing multiple circle graphs on screen at the same time results from trying to emphasize or isolate "subgraphs," or to do both, from within a circle graph. For instance, because circle graphs can be used to model so much data all at once, some more subtle relationships can become de-emphasized and obscured by the general clutter of the graph. By allowing a user to click on several items that are part of a main circle graph, a text mining system can offer subgraphs that display only the relationships between these items. Being able to see such subgraphs discretely while viewing the main circle graph as a whole can lead to new forms and levels of user interactivity.

Similarly, circle graphs can benefit from side-by-side comparisons with other graphs. For instance, instead of limiting a text mining system's graphing options to circle graphs and their subgraphs, one could also use simple concept association graphs to graph highlighted relationships shown within an association-oriented circle graph or category-connecting map.

X.3.6 Self-Organizing Map (SOM) Approaches

Text mining visualization has benefited from contributions made by research into how artificial neural networks can aid information visualization. Perhaps one of the most important of these contributions is the paradigm of self-organizing maps, or SOMs, introduced by Kohonen in 1982 and first applied to problems in information visualization in Lin, Soergel, and Marchionini (1991). SOMs are generated by algorithms that, during a learning phase, attempt to iteratively adjust weighting vectors derived from the relationships found in a high-dimensional statistical data input file into various forms of two-dimensional output maps. Because of this approach, SOMs have advantages in treating and organizing data sets that are extremely large in volume and connecting relationships.

WEBSOM

One of the most widely known and used applications of SOMs to textual data is WEBSOM. WEBSOM uses an adapted version of Kohonen's original SOM algorithm to organize large amounts of data into visualizations that application designers refer to as "document maps," which are essentially graphical models similar to topographical maps (see Figure X.20). Shading on the map face displays concentrations of textual data around or near a particular keyword or concept; lighter areas show less concentration. Hence, the
graphical approach represented by WEBSOM is particularly suited to text mining analytics involving some element of reference to a category. However, the system has proven flexible enough to be applied to other tasks as well. Moreover, although WEBSOM may initially have been geared more toward solving problems in information retrieval for high-dimensionality document collections, academic attention has been devoted specifically to its uses as a toolkit for building text mining interfaces.

Figure X.20. WEBSOM – during queries, users are guided by a document map via visual cues in a GUI that supports both interactive zooming and browsing support functions. (From Lagus et al. 1999. Reprinted with permission of Springer Science and Business Media.)

A basic WEBSOM document map can be seen in Figure X.21. One of the strongest advantages of WEBSOM – and similar SOM-based systems that it has inspired – has been a proven ability to handle large amounts of data. WEBSOM has built a document map to address more than one million documents, and its automatic algorithm for building document maps is reportedly able to complete a visualization for a small-to-modest document collection (approximately 10,000 documents) in less than a few hours.

Another advantage of WEBSOM is the robustness of the interface's functionality. WEBSOM is a fully zoomable interface that enables sections of a full document map to be repeatedly zoomed at various levels of magnification. The document map GUI is also very interactive in that clicking on a highlighted concept identifier or a section of the document map will bring up lists of corresponding documents, statistical information about the documents, or both, typically in a pop-up or separate window.

Figure X.21. WebSOM-like cartographic document map with typical graph legend. (From Borner et al. 2003. Reprinted with permission.)

SOM Algorithm

Honkela (1997) has summarized the SOM algorithm along the following lines:

■ Assume an input dataset of concepts is configured as a table, with the intended output being the mapping of these data onto an array of nodes. The set of input data is described by a vector x(t) ∈ R^n, where t is the index of the input data. In terms of output, each node i in the map contains a model vector mi(t) ∈ R^n; this model vector has the same number of elements as the input vector x(t).
■ The SOM algorithm is stochastic and performs a regression process.
Therefore, the initial values of the elements of the model vectors, m_i(t), may be selected at random.
■ Input data are mapped onto a location in the output array, the m_i(t) of which "matches" best with x(t) in some metric. The SOM algorithm creates an ordered mapping by repeating the following tasks: An input vector x(t) is compared with all the model vectors m_i(t); the best-matching element (node) on the map (i.e., the node whose model vector is most similar to the input vector according to some metric) is discerned. The best-matching node on the output map is sometimes referred to as the "winner." The model vectors of the winner and a number of its neighboring nodes (sometimes called "neighbors") in the array are changed toward the input vector according to a customized learning process in which, for each input vector x(t), the winner and its neighbors are moved closer to x(t) in the input data space. During the learning process, individual changes may actually be contradictory, but the overall outcome is that ordered values for m_i(t) gradually appear across the array. Adaptation of the model vectors in the learning process takes place according to the following equations:

    m_i(t + 1) = m_i(t) + α(t)[x(t) − m_i(t)]   for each i ∈ N_c(t);
    m_i(t + 1) = m_i(t)                          otherwise,

where t is the discrete-time index of the variables, the factor α(t) ∈ [0, 1] is a scalar that defines the relative size of the learning step, and N_c(t) describes the neighborhood around the winner node in the map array. Typically, at the beginning of the learning process the radius of the neighborhood can be quite large, but it is made to shrink during learning.

One suggested method for examining the quality of the output map that results from running the SOM algorithm is to calculate the average quantization error over the input data, which is defined as E{‖x − m_c(x)‖}, where c indicates the best-matching unit (sometimes referred to as the BMU) for x. After training, the BMU in the map is searched for each input sample vector, and the average of the individual quantization errors is returned. (A schematic implementation of this update rule and error measure is sketched below.)
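The following is a minimal sketch of the update rule and quantization error summarized above. The grid size, the linearly decaying learning rate α(t), the square shrinking neighborhood N_c(t), and the random stand-in document vectors are all illustrative assumptions, not WEBSOM's actual settings.

```python
import numpy as np

def train_som(data, rows=10, cols=10, iterations=2000, seed=0):
    rng = np.random.default_rng(seed)
    dim = data.shape[1]
    # Initial model vectors m_i(t) selected at random.
    m = rng.random((rows, cols, dim))
    grid = np.dstack(np.mgrid[0:rows, 0:cols])          # node coordinates on the map
    for t in range(iterations):
        x = data[rng.integers(len(data))]               # pick an input vector x(t)
        # Winner c: node whose model vector best matches x(t).
        dists = np.linalg.norm(m - x, axis=2)
        c = np.unravel_index(np.argmin(dists), dists.shape)
        alpha = 0.5 * (1 - t / iterations)              # learning step alpha(t)
        radius = max(1.0, (rows / 2) * (1 - t / iterations))  # neighborhood N_c(t)
        # Move the winner and its neighbors toward x(t); all other nodes stay put.
        in_hood = np.linalg.norm(grid - np.array(c), axis=2) <= radius
        m[in_hood] += alpha * (x - m[in_hood])
    return m

def avg_quantization_error(data, m):
    # E{||x - m_c(x)||}: mean distance of each input to its best-matching unit.
    flat = m.reshape(-1, m.shape[2])
    return np.mean([np.min(np.linalg.norm(flat - x, axis=1)) for x in data])

if __name__ == "__main__":
    docs = np.random.default_rng(1).random((200, 16))   # stand-in document vectors
    som = train_som(docs)
    print("average quantization error:", avg_quantization_error(docs, som))
```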
Several deficiencies, however, have been identified in WEBSOM's approach. Some have pointed out that WEBSOM's algorithm lacks both a cost function and any sophisticated neighborhood parameters to ensure consistent ordering. From a practical standpoint, some have commented that a user can get "lost" in the interface and its many zoomable layers. In addition, the generalized metaphor of the topographical map is not a precise enough aid in displaying patterns to support all text mining pattern-identification functions.

X.3.7 Hyperbolic Trees
Initially developed at the Xerox Palo Alto Research Center (PARC), hyperbolic trees were among the first focus-and-context approaches introduced to facilitate visualization of large amounts of data. Relying on a creative interpretation of Poincaré's model of the hyperbolic non-Euclidean plane, the approach gives more display area to a part of a hierarchy (the focus area) while still situating it within the entire – though visually somewhat de-emphasized – context of the hierarchy. A widely known commercial toolkit for building hyperbolic tree visualization interfaces is marketed by Inxight Software under the name StarTree Studio (see Figure X.22).

Hyperbolic tree visualizations excel at analysis tasks in which it is useful for an analyst to see both detail and context at the same time. This is especially true in situations in which an analyst needs to traverse very complex hierarchically arranged data or hierarchies that have very large numbers of nodes. Other, more common methods for navigating a large hierarchy of information include viewing one page or "screen" of data at a time, zooming, or panning. However, all of these methods can be disorienting and even distracting during intensive visual data analysis. A hyperbolic tree visualization allows an analyst always to keep perspective on the many attached relationships of a highlighted feature.

Figure X.22. Hyperbolic tree for visualizing National Park Service sites. (From Inxight StarTree Studio. Copyright Inxight Software.)

Hyperbolic tree visualization tools have from their inception been designed to be highly dynamic aids for textual data exploration. Initially, a hyperbolic tree diagram displays a tree with its root at the center of a circular space. The diagram can be smoothly manipulated to bring other nodes into focus. The main properties that support the capabilities of the hyperbolic tree are that
■ elements of the diagram diminish in size as they move outward, and
■ there is an exponential growth in the number of potential components.
Effectively, these two properties might be described as a form of fisheye distortion and the ability to uniformly embed an exponentially growing structure. Together, they allow the hyperbolic tree visualization tool to leverage the Poincaré mapping of the non-Euclidean plane to explore very large hierarchies in a visualization frame of relatively limited size. (A minimal sketch of this kind of focus-and-context projection appears at the end of this subsection.)

The hyperbolic tree's peculiar functionality does more than simply allow a user to interact with a larger number of hierarchical nodes than other, more traditional methods do, or to view a highlighted feature with reference to a richer amount of its context. It also very much encourages hands-on interaction between a user and a hierarchical dataset. Figure X.23 shows another example of a hyperbolic tree. By enabling a user to stretch and pull a complex hierarchy around a focused-on item with a mouse, then skip to another node with a mouse click and view the rest of its hierarchy context at various angles, a hyperbolic tree diagram promotes dynamic, visualization-based browsing of underlying data. Instead of being semipassive viewers and inputters of query data, users become fully engaged participants in pattern exploration.

Figure X.23. Hyperbolic tree visualization of a document collection composed of news articles. (Courtesy of Inxight Software.)
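The following is a minimal sketch of the focus-and-context effect behind hyperbolic tree layouts. It assumes each node already has polar coordinates (depth, angle) from a plain radial layout and simply compresses radial distance through tanh so that nodes near the focus get most of the display area, loosely imitating a Poincaré-disk projection; it is an illustration of the principle, not Lamping and Rao's actual algorithm.

```python
import math

def to_disk(depth, angle, scale=0.5):
    """Map a radial tree position into the unit disk: tanh keeps every node
    inside the disk while compressing distant levels toward the rim."""
    r = math.tanh(scale * depth)
    return (r * math.cos(angle), r * math.sin(angle))

if __name__ == "__main__":
    # A toy hierarchy: (label, depth, angle in radians) - illustrative values.
    nodes = [("root", 0, 0.0), ("child-a", 1, 0.0), ("child-b", 1, 2.1),
             ("grandchild", 2, 0.3), ("great-grandchild", 3, 0.35)]
    for label, depth, angle in nodes:
        x, y = to_disk(depth, angle)
        print(f"{label:18s} -> ({x:+.3f}, {y:+.3f})")
```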
X.3.8 Three-Dimensional (3-D) Effects
Many text mining systems have attempted to leverage three-dimensional (3-D) effects in creating more effective or flexible visualization models of the complex relationships that exist within the textual data of a document collection. 3-D visualizations offer the hope that, by increasing the apparent spatial dimensionality available for creating graphic models of representations such as those produced by more complex, second-generation, multiple-lattice SOMs, users may be able to examine and interact with models that make fewer compromises than are required by traditional (2-D) hierarchical or node-and-edge representations.

Moreover, higher powered 3-D rendering algorithms and the wider availability of sophisticated workstation graphics cards create the conditions for making such 3-D visualizations more practical for use in text mining systems than ever before. In addition, to supplement navigation in a non-2-D environment, many relatively lightweight VR rendering and exploration environments are now available that can easily be embedded into front-end interfaces. An example of a 3-D network map can be seen in Figure X.24.

Figure X.24. A 3-D overview of a scientific author cocitation map suggesting a diverse, unconcentrated domain. (From Borner et al. 2003.)

Despite all of the potential opportunities offered by 3-D treatments of models for information visualization, these models also introduce several new challenges and problems. Two significant problems for using 3-D visualization approaches in text mining are occlusion and effective depth cueing (see the example in Figure X.25). Both of these problems are exacerbated in situations in which presentation of high-dimensional data is required. Unfortunately, such situations are precisely those in which text mining applications will be likely to incorporate 3-D visualization tools.

Figure X.25. Presentation from a visualization suggested by Gall et al. (1999). The visualization represents hierarchies over time; however, challenges of element occlusion occur at lower levels of the various hierarchies. Such a presentation may also not be understood intuitively by users. (From Graham 2001. Reprinted with permission, © 2001 IEEE.)

Moreover, 3-D visualizations do not generally simplify the process of navigating and exploring textual data presentations. Often, some level of specialized navigational operations must be learned by the user of a 3-D projection or VR-based visualization. This can become something of a barrier to inspiring intuitive navigation of a data presentation. Increased complexity in navigation is generally inversely proportional to stimulating high levels of iterative user interaction.

There is no doubt that the verdict will be out for some time on the impact of 3-D treatments on text mining visualization. Certainly, there will be a great deal of continued research and experimentation in an effort to make 3-D approaches more practically useful. Currently, however, the disadvantages of 3-D approaches outweigh the proposed advantages; 3-D visualization may never be very useful in comprehending nonspatial information like that encountered in text mining.
In the short term, designers of text mining systems should carefully evaluate the practical benefits and drawbacks inherent in the potential inclusion of a 3-D visualization tool in their applications before being carried away by the theoretical advantages of such new graphical tools.

X.3.9 Hybrid Tools
In the discussion of circle graphs in Section X.3.5, it was noted that combinations of multiple identical graphs, or of multiple graphs of different types, can sometimes play a special role in providing analytical information about the results from a text mining query's answer set. Designers of visualization tools often come up with presentation techniques that might be seen as hybrid forms incorporating components of different visualization formats into a coherent, new form. Three creative examples of hybrid visualization approaches can be seen in Figures X.26, X.27, and X.28.

Figure X.26. Daisy chart combining aspects of a circle graph and a complex comparative histogram. (Reprinted with permission from James Miller.)

One critical driver for the innovation of such forms is the desire to achieve more presentation concision. By supplementing currently well-known graphical formats with additional new elements, one might at least theoretically be able to increase dramatically the amount of information communicated by the graphical tool. One of the possible pitfalls in creating such hybrid forms is overcomplication; ideally, users should be able to understand the major "messages" communicated by a presentation approach without too much potential for confusion. Another potential pitfall is decreased visual clarity of a graph; because text mining visualizations so often involve an overabundance of patterns, more complex visualizations can also result in greater presentation "clutter" issues.

Figure X.27. View of an HSOM, or hyperbolic self-organizing map, that projects 3-D elements on a triangularly tessellated hyperbolic tree grid. (From Ontrup and Ritter 2001a. Reprinted with permission of The MIT Press.)

Figure X.28. Specialized network diagram that includes elements of node-and-link graphs and histogrammatic presentation with 3-D effects and character-based tables. From the Arist Co-Citation Project. (From Borner et al. 2003. Reprinted with permission.)

Because most designers of text mining systems actually implement visualization approaches initially developed by information visualization specialists, these considerations should be weighed when evaluating the possible graphical tool alternatives for a given text mining application.
Designers of text mining systems need to be able to consider several commercial or freeware visualization alternatives from the perspective of their text mining system's intended functionality. Above all, these system designers need to be careful to avoid the temptation of having "a visualization solution in search of a text mining problem." Creating the conditions for maximum interaction from a user depends on ensuring a seamless, compatible fit between a visualization approach's strengths and the algorithmic search techniques that form the core of the text mining situation a particular presentation layer is meant to support. For this reason, it sometimes does pay to look for hybrid approaches: a special-purpose hybrid visualization form may meet the needs of a very specific text mining application better than more generic forms do.

X.3.10 Citations and Notes
Sections X.3.1–X.3.3
The Document Explorer application is described in Feldman, Kloesgen, and Zilberstein (1997a) and Feldman, Fresko, Hirsh, et al. (1998) and summarized in Section VI.5.1. Discussions of relevant hierarchical visualization approaches can be found in Karrer and Scacchi (1990); Johnson and Shneiderman (1991); Robertson, Mackinlay, and Card (1991); and Hearst and Karadi (1997).

Simple concept graphs are an updating of the simple keyword graphs introduced in Feldman, Kloesgen, and Zilberstein (1997a). A good general discussion of some of the considerations in employing DAGs in information visualization can be found in Melancon and Herman (2000), in which the authors make several useful points, including the following: (a) DAGs might be seen as a natural generalization of tree structures, (b) aesthetically pleasing drawings of DAGs are those with the minimum possible number of edge crossings (though this can sometimes be difficult to manage in graphing large datasets), and (c) DAGs can serve as a kind of intermediary form between tree structures and general graphs. For a review of a DAG-generating program that creates the type of DAG visualizations described and illustrated in Section X.3.2, see Gansner, North, and Vo (1988); see also Gansner et al. (1993).

All references to Zhou and Cui's DAG-based representations of elements of data from the GO Consortium's Gene Ontology are from Zhou and Cui (2004). Information on the Gene Ontology can be found in GO Consortium (2001).

Sections X.3.4–X.3.7
The two examples of the multiline graphs shown in Figure X.15 are directly from Borner, Chen, and Boyack (2003). At least one examination of 2-D histograms in text mining has suggested that they are not especially useful at displaying some basic types of query results relating to association rules (Wong et al. 2000).

Aumann et al. (1999) provides an early treatment of circle graphs in text-oriented knowledge discovery. Rainsford and Roddick (2000) underscores the comprehensive "at-a-glance" property that circle graphs have in concisely showing an entire representation of relationships in large amounts of data. The NetMap circle graph information visualization tool is described in Duffet and Vernik (1997). Information about commercial NetMap products is available at <www.netmap.com>.

Important background reference materials on SOMs include Kohonen (1981), Kohonen (1982), Lin et al. (1991), Lin (1992), Kohonen (1995), Kohonen (1997), Lin (1997), Kohonen (1998), and Lagus (2000b). Background reference materials on WEBSOM include Honkela et al.
(1997); Honkela, Lagus, and Kaski (1998); Lagus (1998); and Lagus et al. (1999). The SOM algorithm described in Section X.3.6 has been summarized from Honkela (1997). Beyond WEBSOM, many systems and computer science research projects have incorporated SOM-style visualizations. Some representatives of the wide influence of SOM-style interfaces can be seen in Merkl (1998), Borner et al. (2003), and Yang, Chen, and Hong (2003).

Hyperbolic trees are introduced and discussed in Lamping and Rao (1994); Lamping, Rao, and Pirolli (1995); and Munzner and Burchard (1995). StarTree Studio is a product of Inxight Software; additional product information can be found at <www.inxight.com>. All images from StarTree Studio are the property of Inxight Software. The hyperbolic tree representation of the Internet comes from Munzner and Burchard (1995). Another interactive "focus + context" approach, the Table Lens, is discussed in Rao and Card (1994).

Sections X.3.8–X.3.9
Borner et al. (2003) provides a brief but practical review of some 3-D approaches used in visualizing knowledge domains that would also be applicable to text mining activities. Koike (1993) is another useful source. The effects of potential drawbacks such as occlusion and effective depth cueing in 3-D visualizations are discussed in Rokita (1996) and Hubona, Shirah, and Fout (1997). The visualization in Figure X.25 appears in Graham (2001) and originally appeared in Gall et al. (1999). Graham (2001) points out that there is growing consensus that 3-D visualizations are not that useful in comprehending nonspatial information, whereas Cockburn (2004) seems to suggest the opposite view.

The Daisy Chart displayed in Figure X.26 is a visualization copyrighted by James Miller of Daisy Analysis (<www.daisy.co.uk>). The daisy chart also appears in Westphal and Bergeron (1998). Figure X.27 illustrates one application of the hyperbolic self-organizing map, or HSOM; the HSOM is discussed in Ontrup and Ritter (2001a, 2000b). The hybrid 3-D network diagram illustrated in Figure X.28 comes from Borner et al. (2003).

X.4 VISUALIZATION TECHNIQUES IN LINK ANALYSIS
Although the discipline of link analysis encompasses many activities, several specific tasks are frequently addressed by a few specialized visualization approaches. In particular, these tasks include
■ analysis of a single known concept for its relatedness to, or degrees of separation from, other concepts, and
■ the identification and exploration of networks or pathways that link two (or more) concepts.
Although various generic text mining activities generally involve, as a primary exploratory approach, the investigation of query result sets in a browser supplemented by visualization techniques, current state-of-the-art link analysis methods almost always depend on the visualization approach as a central operation. The exploration of pathways and patterns of connectedness is substantially enhanced by visualizations that allow tracking of complex concept relationships within large networks of concepts.

Chapter XI focuses on essential link analysis concepts such as paths, cycles, and types of centrality and also offers a detailed, running example involving the construction of a model of a social network appropriate to link analysis activities in the form of a spring graph. Although spring graphs are certainly one of the more common graphing forms used in link analysis, many visualization techniques have been applied in this quickly evolving discipline.
This section surveys some visualization approaches that have been adapted to support link analysis.

X.4.1 Practical Approaches Using Generic Visualization Tools
Developers of graphical interfaces to aid in link detection and analysis often slightly modify more generic visualization formats to orient these graphic approaches more toward link detection activities. In particular, simple concept graphs, circle graphs, and hyperbolic trees have been applied to, and in some cases modified for, the support of link detection tasks. Even histograms and line graphs have been put into service for link analytics.

For example, a common simple concept association graph could be used to show persons associated with organizations within the context of some other concept. Such a graph could be oriented toward link detection activities by centering the graph on a single known person and allowing the outwardly radiating edges and vertices to constitute a relationship map. In a sense, this type of manipulation of the simple concept association graph creates at least an informal focus for the graph. Ease of following the relationships in the map can be enhanced by stylistic devices: person nodes and labels and concept nodes and labels could be drawn in contrasting colors, edge thickness could be determined by the number of documents in which an association between two linked concepts occurs, and so on. (A minimal sketch of building such a centered relationship map from co-occurrence counts follows below.)

Figure X.29 shows the results of a query for all person concepts with associations to organization concepts within the context of the concept terrorism within a given document collection. After a simple concept association graph was generated from the results of the query, a single person concept, Osama bin Laden, was highlighted and drawn to form a central "hub" for the diagram. All person concepts were identified in a darker typeface, whereas all organization concepts were denoted by a lighter typeface. An analyst can traverse relationships (associations) emanating from Osama bin Laden in a quick and orderly fashion. The methodology also has the advantages of being relatively quick to implement and, often, of requiring only some customization of the more standard visualization approaches found bundled with most text mining-type applications.

Figure X.29. Graphing results of a search query for all person concepts with associations to organization concepts within the context of the concept terrorism, with the concept Osama bin Laden as central vertex.
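The following is a minimal sketch of the approach just described, assuming an earlier extraction step has already reduced each document to the set of concepts it mentions; edge weight is simply the number of documents in which two concepts co-occur, which a renderer could translate into edge thickness. The document contents are illustrative.

```python
from collections import Counter
from itertools import combinations

def cooccurrence_edges(docs):
    """Count, for every concept pair, the documents in which both appear."""
    edges = Counter()
    for concepts in docs:
        for a, b in combinations(sorted(set(concepts)), 2):
            edges[(a, b)] += 1
    return edges

def ego_map(edges, hub):
    """Keep only edges that touch the hub concept, forming a centered
    relationship map around a single known entity."""
    return {pair: w for pair, w in edges.items() if hub in pair}

if __name__ == "__main__":
    docs = [
        {"Osama bin Laden", "Al Qaeda", "Taliban"},
        {"Osama bin Laden", "Al Qaeda"},
        {"Colin Powell", "State Dept", "Taliban"},
        {"Osama bin Laden", "Taliban", "Northern Alliance"},
    ]
    edges = cooccurrence_edges(docs)
    for (a, b), w in sorted(ego_map(edges, "Osama bin Laden").items()):
        print(f"{a} -- {b}: {w} document(s)")
```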
Of course, there are some notable limitations to this approach. First, there is a rather limited number of nodes radiating out from a central "hub" node that a user can take in at any one time. This limitation can be offset somewhat by zooming and panning capabilities. Second, there is no sophisticated or automatic weighting methodology for emphasizing stronger or more interesting associations by some sort of visual proximity cue within a confined and manageable visualization space. This is a particularly limiting factor in the case of very large node-and-edge graphs. Certainly, one can easily increase line density between nodes or prune nodes from the graph altogether based on some quality measures. In graphs in which very large numbers of nodes and associations are present, however, there is significant risk that these two limitations will prevent the user from maintaining his or her focus on the central node (because of the need to pan, page down, or zoom in a very large graph) and from receiving much information about the comparative relatedness of nodes to the central focus node by means of strong spatial or proximity-based visual cues. Other types of specialized visualization formats do relatively better jobs of addressing these limitations.

X.4.2 "Fisheye" Diagrams
Fisheye diagrams show a distorted, lenslike view of a graph to highlight selected "focal point" detail while maintaining relatively easy viewing of its broader, more global visual context. The term "fisheye" derives from the diagram's analogy to the super-wide-angle or fisheye lens used in photography (fisheye lenses magnify the image at the focal point while de-emphasizing, but still showing, images at the periphery). Fisheye views of data were first proposed by Furnas in 1981 and substantially enhanced by Sarkar and Brown (1992).

The best fisheye approaches to visualizing data attempt to balance local or highlighted detail with a global context. Fisheye approaches have been described as being divisible into two categories: distorting and filtering fisheyes. Distorting fisheyes adjust the size of various graphical elements in a diagram to correspond to their interestingness, whereas filtering fisheyes de-emphasize or suppress the display of less interesting data.

Distorting Fisheye Views
Fisheye diagrams have vertices and edges, like node-and-edge graphs, but must accommodate three main ideas:
■ The position of a given vertex in a fisheye view depends on its position in the "normal view" of the diagram and its distance from the fisheye view's focus.
■ The size of a given vertex in the fisheye view depends on its distance from the focus, its size in the normal view, and a value representing the relative importance of this vertex in the global structure.
■ The amount of detail in a vertex depends on its size in the fisheye view.
Sarkar and Brown (1992) formalized these concepts in the following way:
1. The position of vertex v in the fisheye view is a function of its position in normal coordinates and the position of the focus f:
   P_feye(v, f) = F1(P_norm(v), P_norm(f)).
2. The size of the vertex in the fisheye view is a function of its size and position in normal coordinates, the position of the focus, and its a priori importance, or API, which is a measure of the relative importance of the vertex in the global structure:
   S_feye(v, f) = F2(S_norm(v), P_norm(v), P_norm(f), API(v)).
3. The amount of detail to be shown for a vertex depends on the size of the vertex in the fisheye view and the maximum detail that can be displayed:
   DTL_feye(v, f) = F3(S_feye(v), DTL_max(v)).
4. The visual worth of a vertex depends on the distance between the vertex and the focus in normal coordinates and on the vertex's API:
   VW(v, f) = F4(P_norm(v), P_norm(f), API(v)).
(One concrete choice of F1 and F2 is sketched below.)

Fisheye diagrams represent a good fit with the visualization requirements of many link analysis tasks. By applying a fisheye treatment to vertices of a graph that are interesting to a user, he or she can scan, without visual interruption or panning, among many contextual relationships shown in the diagrammatic elements presented in the periphery of the graph. Figure X.30 shows some fisheye treatments of a SOM.
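The following is a minimal sketch of a distorting fisheye transform in the spirit of Sarkar and Brown (1992). It picks one concrete pair of functions F1 and F2: the classic distortion g(x) = (d + 1)x / (dx + 1) applied per coordinate, and a size that grows with API and with closeness to the focus. The distortion factor d and the size rule are illustrative choices, not the only ones the formalism allows.

```python
import math

def fisheye_position(p_norm, focus, bound=1.0, d=3.0):
    """F1: map normal-view coordinates to fisheye coordinates around the focus."""
    out = []
    for p, f in zip(p_norm, focus):
        x = (p - f) / bound                       # normalized offset in [-1, 1]
        g = (d + 1) * abs(x) / (d * abs(x) + 1)   # magnify near focus, compress far
        out.append(f + math.copysign(g * bound, x))
    return tuple(out)

def fisheye_size(s_norm, p_norm, focus, api=1.0, d=3.0):
    """F2: shrink a vertex as its distance from the focus grows, scaled by
    its a priori importance (API)."""
    dist = math.dist(p_norm, focus)
    return s_norm * api * (d + 1) / (d * dist + 1)

if __name__ == "__main__":
    focus = (0.0, 0.0)
    for p in [(0.1, 0.0), (0.5, 0.5), (0.9, 0.9)]:
        print(p, "->", fisheye_position(p, focus),
              "size:", round(fisheye_size(10, p, focus), 2))
```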
Figure X.30. Fisheye treatments of a SOM mapped onto a 20 × 20 grid with various distortion values; this type of display is commonly used in maps of concepts within categories. (From Yang, Chen, and Hong 2003. Reprinted with permission from Elsevier.)

Filtering Fisheye Views
Filtering fisheye approaches, such as fractal approaches, focus on the control of information in the creation of display layouts. Such approaches attempt, through approximation, to create simpler abstractions of complex structures by filtering the amount of information displayed in a way corresponding to some system- or user-defined threshold. Examples of filtering view approaches are found in Figure X.31.

Figure X.31. Filtering view approaches (fractal view) applied to the same category map at different threshold settings. (From Yang, Chen, and Hong 2003. Reprinted with permission from Elsevier.)

Yang, Chen, and Hong (2003) has summarized an approach to creating a fractal view of a category map:
■ The fractal dimension D of a structure is its similarity dimension, which is controlled by a scale factor and a branching factor: D = −log_{r_x} N_x, where r_x represents the scale factor and N_x represents the branching factor.
■ Satisfying the fractal requirement demands that the following relation between the number of branches and the scale factor hold at each node of the structure: log_{r_x} N_x = constant.
■ Formalizing the fractal view entails taking the focus point into account and regarding it as the root. Fractal values are propagated to the other nodes based on the following formulation:
   Fractal value of the focus point: F_focus = 1.
   Fractal value of a child of region x in a category map: F_child of x = r_x F_x,
   where F_x is the fractal value of x, r_x = C × N_x^(−1/D), C is a constant with 0 ≤ C ≤ 1, D is the fractal dimension, and N_x is the branching factor.
Control in this type of view is maintained by the setting of the threshold values. Regions of the category map with fractal values below the threshold disappear or become diminished. (A minimal sketch of this propagate-and-threshold scheme follows below.)
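The following is a minimal sketch of the fractal (filtering) view just described: starting from the focus node with fractal value 1, each child inherits r_x · F_x with r_x = C · N_x^(−1/D), and nodes whose value falls below a threshold are filtered out. The tree, C, D, and threshold values here are illustrative.

```python
def fractal_values(tree, focus, C=0.8, D=1.0):
    """Propagate fractal values outward from the focus through a
    dict-of-children tree."""
    values = {focus: 1.0}
    frontier = [focus]
    while frontier:
        node = frontier.pop()
        children = tree.get(node, [])
        if not children:
            continue
        r = C * len(children) ** (-1.0 / D)   # r_x = C * N_x^(-1/D)
        for child in children:
            values[child] = r * values[node]  # F_child = r_x * F_x
            frontier.append(child)
    return values

def filtered_view(values, threshold):
    """Keep only regions whose fractal value reaches the threshold."""
    return {n: v for n, v in values.items() if v >= threshold}

if __name__ == "__main__":
    tree = {"focus": ["a", "b"], "a": ["a1", "a2", "a3"], "b": ["b1"]}
    vals = fractal_values(tree, "focus")
    print(filtered_view(vals, threshold=0.3))   # prunes a's small children
```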
Applications to Link Detection and General Effectiveness of Fisheye Approaches
Both distorting and filtering fisheye approaches are particularly useful to link detection operations aimed at performing degree-of-relatedness or degree-of-separation analyses. By being able to maintain a focal point on vertices representing known data, users substantially enhance their ability to identify and explore connections with vertices on the diminished but still viewable periphery of the graph. Moreover, the ability – supported by many fisheye-type interfaces – to move an item from the periphery to the focal point quickly through direct manipulation of graph elements, while not completely losing sight of the earlier focused-upon vertex (which will have moved, in turn, to the periphery), can be quite important. Indeed, beyond just generally acting to encourage greater interaction with the text mining system, this type of functionality allows users to sift more confidently through relationship data without a feeling of disorientation or "getting lost."

Distorting and filtering fisheye approaches are not mutually exclusive. When dealing with very large volumes of data, link detection operations aimed at discovering the network of truly interesting relationships linked to a known concept can be greatly enhanced by being able both (a) to see as much of the peripheral context as possible (via a distorting view approach) and (b) to winnow the overall display of data by means of the threshold setting (via a filtering view algorithm).

Yang, Chen, and Hong (2003) found that both distorting and filtering view approaches were substantially more effective (on a speed measure) in helping users discover information than having no visualization tool at all. Yang et al. also found that users achieved faster discovery results employing filtering view approaches than distorting view approaches but that visualizations incorporating both distorting view and filtering view functionality were the most effective at increasing the speed of discovering useful data. An example of a visualization incorporating both distorting view and filtering view approaches can be seen in Figure X.32.

Figure X.32. Visualization of a category map relying on both distorting view and filtering view techniques. (From Yang, Chen, and Hong 2003. Reprinted with permission from Elsevier.)

X.4.3 Spring-Embedded Network Graphs
Link analysis activities benefit from visualization approaches that offer quick spatial and layout cues to the relative proximity that certain relations between concepts possess. Spring embedding is a graph generation technique first described by Eades (and later refined in significant ways by both Kamada and Kawai and Fruchterman and Reingold) that distributes nodes in a two-dimensional plane with some level of separation while attempting to keep connected nodes closer together relative to some form of weighting scheme. Spring graphs are a common form in many academic and commercial text mining applications with an orientation toward link detection, such as ClearForest's ClearResearch (see Figure X.33) and Paul Mutton's PieSpy social network visualization software (Mutton 2004) (see Figure X.34).

In generating a spring-embedded network graph, or spring graph, we regard each node as a kind of "charged particle" within a graph model that simulates a closed-force system. This formulation creates a repulsive force between every pair of nodes in the system. Each edge in the graph, on the other hand, is modeled as a spring that applies an attractive force between the pair of nodes it links. Ultimately, the full spring graph is drawn in iterations that calculate the totality of repulsive and attractive forces acting on nodes within the closed system. At the close of each iteration, all the nodes in the system are moved according to the forces that were applied during that iteration's calculations. (A minimal sketch of one such iteration scheme follows below.)

Figure X.33. Spring graph of person concepts associated with organization concepts in the context of terrorism.
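The following is a minimal sketch of the force-directed iteration just described, loosely in the style of Fruchterman and Reingold (1991): every pair of nodes repels, every edge attracts, and all nodes are moved at the end of each iteration. The constants and cooling schedule are illustrative; as noted below, production layouts add preprocessing and clustering heuristics on top of this core loop.

```python
import math, random

def spring_layout(nodes, edges, iterations=200, k=0.3, seed=0):
    rng = random.Random(seed)
    pos = {n: [rng.random(), rng.random()] for n in nodes}
    for t in range(iterations):
        force = {n: [0.0, 0.0] for n in nodes}
        # Repulsion between every pair of "charged particles."
        for i, a in enumerate(nodes):
            for b in nodes[i + 1:]:
                dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
                d = max(math.hypot(dx, dy), 1e-6)
                f = k * k / d
                for n, s in ((a, +1), (b, -1)):
                    force[n][0] += s * f * dx / d
                    force[n][1] += s * f * dy / d
        # Spring attraction along each edge.
        for a, b in edges:
            dx, dy = pos[a][0] - pos[b][0], pos[a][1] - pos[b][1]
            d = max(math.hypot(dx, dy), 1e-6)
            f = d * d / k
            for n, s in ((a, -1), (b, +1)):
                force[n][0] += s * f * dx / d
                force[n][1] += s * f * dy / d
        # Move all nodes at the close of the iteration; cap displacement and cool.
        temp = 0.1 * (1 - t / iterations)
        for n in nodes:
            fx, fy = force[n]
            mag = max(math.hypot(fx, fy), 1e-6)
            step = min(mag, temp)
            pos[n][0] += fx / mag * step
            pos[n][1] += fy / mag * step
    return pos

if __name__ == "__main__":
    nodes = ["A", "B", "C", "D"]
    edges = [("A", "B"), ("B", "C"), ("C", "A")]   # D stays disconnected
    for n, (x, y) in spring_layout(nodes, edges).items():
        print(n, round(x, 3), round(y, 3))
```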
Typically, in most practical situations, the creation of spring graphs occurs in a multistage process. Running a spring-embedder algorithm is only one stage in this process, which would customarily also include some customized preprocessing routines to reduce complexity, heuristics to help establish clusters, and other processes to promote faster generation of spring graphs in real-time graph-rendering situations. A full example of the construction of a social network spring graph can be found in Chapter XI.

Figure X.34. Simple social network of Internet Relay Chat (IRC) users depicted in a spring graph by the PieSpy social network visualization application. (From Mutton 2004. Reprinted with permission, © 2001 IEEE.)

Spring graphs can range in size from a handful of nodes to the hundreds of thousands. Spring graphs whose nodes are all linked by edges are called connected spring graphs; those in which discrete networks of nodes appear are referred to as disconnected spring graphs (see Figure X.35).

Figure X.35. GUI with visualization of a disconnected spring graph showing person co-occurrence patterns.

Link detection applications leverage spring graphs to provide visual cues in network maps in which edge length corresponds to the actual relatedness of two nodes. These visual cues allow a user to trace out degrees of relatedness and separation quickly, making pattern exploration more effective. Moreover, spring graphs' ability to model extremely large networks makes them doubly useful in link detection activities involving very large data collections.

X.4.4 Critical Path and Pathway Analysis Graphs
Link analysis can also be visualized through directed graphs that show linked events or paths of interrelated actions. Critical path diagrams are typically based on a graphical model called an activity network, which is a form of DAG. Unlike most DAGs, in which emphasis is usually placed on the vertices of the graph, critical path diagrams equally emphasize the nodes – which typically represent either entities or events – and the edges – which can represent tasks, actions, or decisions. Figure X.36 shows a rudimentary critical path diagram.

Figure X.36. Critical path diagram.

In such diagrams, a critical path is a chain of specific nodes and edges – or entities, events, and the tasks or actions that connect them – that demonstrates some level of interestingness. As in Figure X.36, the patterns formed by such chains of nodes
and edges can be highlighted by stylistic elements in the visualization process (e.g., a different color for edges that link nodes in the chain). Frequently, though not always, these critical paths will have an identifiable start and finish and thus constitute a directed subgraph that is part of a wider activity network.

Critical path graphs are a staple part of link detection activities aimed at investigations of criminal activities. In crime analysis visualization graphs, nodes may represent both entities (persons, places, items) and events (crimes, pretrial proceedings, trials). Also, a timeline may be introduced to frame actions that occur over time.

Visualizations that support critical path analysis share similarities with the graphic approaches used in pathway analysis for genomics and proteomics research, though there are also some differences. Link detection systems emphasize the search for chains or pathways of interactions between proteins, drugs, and diseases in directed graphs. Edges in these directed graphs are often highlighted with color coding to identify a pathway – but this color coding of edges is also used to specify different types of interactions.

X.4.5 Citations and Notes
Sections X.4–X.4.2
Fisheye views were introduced by G. Furnas; probably the best early description is in Furnas (1986). Subsequently, Sarkar and Brown (1992) added useful upgrades to fisheye views of data and abstracted the general algorithmic approach used to generate fisheye views. The algorithmic formulation for fisheye views comes from Sarkar and Brown (1992). Yang, Chen, and Hong (2003) provides a good treatment of distorting and filtering approaches taken with fisheye views; Noik (1996) also contains a useful discussion. Figures X.30, X.31, and X.32, as well as the generalized approach to creating a fractal view of a category map discussed in Section X.4.2, have been summarized from Yang, Chen, and Hong (2003). Yang et al. apply various fisheye approaches to a category map generated using a Kohonen-style SOM. Koike (1995) offers very useful background on the use of fractal approaches as filtering-view techniques.

Sections X.4.3–X.4.5
Spring-embedded network graphs were introduced in Eades (1984) and refined in several subsequent papers – perhaps most notably Kamada and Kawai (1989) and Fruchterman and Reingold (1991). More on ClearForest's ClearResearch product can be found at <www.clearforest.com>. Further discussion of PieSpy can be found in Mutton and Rodgers (2002) and Mutton (2004). Mutton and Golbeck (2003) suggests the formulation of a spring graph as a closed-force system in which every node is a "charged particle." The spring graph in Figure X.34 comes from Mutton (2004).

X.5 REAL-WORLD EXAMPLE: THE DOCUMENT EXPLORER SYSTEM
Initially developed in 1997, Document Explorer is a full-featured text mining system that searches for patterns in document collections. Such a collection represents an application domain, and the primary goal of the system is to derive patterns that
The patterns that have been verified as interesting are structured and presented in a visual user interface allowing the user to operate on the results, to refine and redirect mining queries, or to access the associated documents. Like many general text mining systems, Document Explorer focuses on the three most common patterntypes(e.g.,frequentsets,associations,distributions);however,italsosupports exploration of textual data by means of keyword graphs. Perhaps most notably for a real-world system of its time frame, Document Explorer provides a well-rounded suite of complementary browsing and visualization toolstofacilitateinteractiveuserexplorationofitsdocumentcollection.Examination of Document Explorer with this in mind can provide useful insights into how a prac-tical text mining system leverages presentation-layer tools. The Document Explorer system contains three main modules. A diagram of the overall Document Explorer system architecture is shown in Figure X.37. The first module is the backbone of the system and includes the KDTL query front end (see Section II.3), into which the user can enter his or her queries for patterns; the inter-preter,whichparsesaqueryandtranslatesitintofunctioncallsinthelowerlevels;and the data mining and the data management layer. These two layers are responsible for the actual execution of the user’s query. The data mining layer contains all the search and pruning strategies that can be applied for mining patterns. The main patterns offered in the system are frequent concept sets, associations, and distributions. The embedded search algorithms control the search for specific pattern instances within the target database. This level also includes the refinement methods that filter redundant information and cluster closely related information. The data manage-ment layer is responsible for all access to the actual data stored in the target database. This layer isolates the target database from the rest of the system. The second module performs source preprocessing and categorization functions. This module includes the set of source converters and the text categorization soft-ware. It is responsible for converting the information fetched from each of the avail-able sources into a canonical format for tagging each document with the prede-fined categories, and for extracting all multiword terms from the documents. In this preprocessing component, the system extracts all the information that will subse-quently be used by the data mining methods. The target database is represented as a compressed data structure. Besides the target database, the text mining methods in Document Explorer exploit a knowledge base on the application domain. The terms of this domain are arranged in a DAG and belong to several hierarchically arranged categories. In the Reuters newswire col-lection used in this example, the main categories correspond to countries, economic topics, persons, and so on. Each category (e.g., economic topics) has, for example, subcategories such as currencies and main economic indicators. Relations between these categories give further background knowledge. The knowledge base for the Reuters collection includes relations between pairs of countries (such as countries with land boundaries), between countries and persons, countries and commodities, and so on. 
These relations can be defined by the user or transformed by special X.5 Real-World Example: The Document Explorer System 237 User Preprocessing Tasks Categorization, Feature/ Term Extraction News and Email WWW FTP Resources Other Online Resources Processed Document Collection Prepared/Compressed Intermediate Representation Text Mining Discovery Algorithms Background Knowledge Base DAG Presentation Layer GUI, Visualization KDTL Query Interpreter Y-Axis X-Axis Explore KDTL Query Figure X.37. Architecture of the Document Explorer system. utilities from generally available sources such as the CIA World Fact Book or com-panies’ home pages. Finally, the third module performs presentation-layer functions and is responsi-ble for providing an attractive set of GUI-based text mining tools and graph-based visualization techniques that give the user a much easier access to the system. Simple concept graphs are a special interactive visualization technique to present data min-ing results. Simple concept graphs extend the notion of association rules to relations between keywords and phrases occurring in different documents. The focus of the following functional descriptions is on this presentation layer module. X.5.1 Presentation-Layer Elements Visual Administrative Tools: Term Hierarchy Editor To make full use of Document Explorer’s knowledge discovery tools, the docu-ments’ annotations are grouped into categories of related terms (e.g. country names, machine parts, etc.) and placed in a hierarchical structure. The Term-Hierarchy editor, included in Document Explorer, provides a graphical tool for easy construction and 238 Visualization Approaches manipulation of such hierarchies. Document Explorer also comes with a predefined term hierarchy for common topics. The Knowledge Discovery Toolkit Document Explorer places extensive visualization and browsing tools at the user’s disposal for viewing the results of the discovery process. The user is provided with dynamic browsers, which allow dynamic drill-down and roll-up in order to focus on the relevant results. Any part of the discovery process can either be applied to the entire collection or to any subsets of the collection. Throughout the mining operation, the system maintains the links to the original documents. Thus, at any stage in the discovery process, the user can always access the actual documents that contributed to the discovered pattern. Document Explorer tools can be grouped into four main categories: Browsers, Profile Analysis, Clustering, and Pattern Discovery. In addition, the system provides novel visualization techniques. Browsers The Document Explorer discovery process starts at the browsing level. Browsing is guided by the actual data at hand, not by fixed, rigid structures. Document Explorer provides two dynamic, content-based browsers: distribution browser, and the interactive distribution browser. ■Distribution Browser. The distribution browser presents the user with the fre-quency of all terms (concepts) in the collections grouped by category and allows the collection to be browsed based on these frequencies. In addition, the user can specify a base concept, and the browser will present him or her with the dis-tribution of all other concepts with respect to the base concept. With this tool, the user can immediately find the most relevant term related to whatever he or she is interested in. 
Visualization Tools
Document Explorer is equipped with a suite of visualization tools. These aid the user in gaining a quick understanding of the main features of the collection.

Figure X.38. The GUI for Document Explorer's interactive distribution browser. (From Feldman, Kloesgen, and Zilberstein 1997b.)

The visualization tools afford a graphical representation of the connections between terms (concepts) in the collection. The graphical representations provide the user with a high-level, bird's-eye summary of the collection. Three of Document Explorer's main visualization tools – simple concept graphs, trend graphs, and category connection maps – are described here.
■ Simple Concept Graphs. As described in Section IV.3.1, a simple concept graph in Document Explorer consists of a typical set of graph vertices and edges representing concepts and the affinities between them. A simple concept graph in Document Explorer is generally defined with respect to a context, which determines the setting in which the similarity of keywords is of interest. Figure X.39 shows a simple concept graph for the "country" category in the context of "crude oil," whereas Figure X.40 illustrates a simple concept association graph with multiple categories but only one vertex. In Document Explorer, simple concept graphs can be defined either for the entire collection or for subsets of the collection, and for arbitrarily complex contexts (see Figure X.41). The system provides the user with an interactive tool for defining and refining the graphs.
■ Trend Graphs. Trend graphs (see Section II.1.5) provide a graphical representation of the evolution of the collection. The user is presented with a dynamic picture whose changes reflect the changes in the collection. The user can focus on any slice in time and obtain the state of the information at the given time. The user can also define the granularity at which the information is analyzed and presented.
■ Category Connection Maps. This visualization tool enables the user to view the connections between several different categories in relation to a given context.
Figure X.42 presents the connections between the categories people, brokerage houses, and computer companies within the context of mergers. (Some similar sample implementations of the circle graph as a category connection map are described in Section XII.2.2.)

Figure X.39. A Document Explorer simple concept graph – "Countries" in the context of "Crude Oil." (From Feldman, Kloesgen, and Zilberstein 1997b.)

Figure X.40. Simple concept association graph from Document Explorer – many categories but one vertex. (From Feldman, Kloesgen, and Zilberstein 1997b.)

Figure X.41. Simple concept graph from Document Explorer – interesting concept sets and their associations; context: crude oil; categories: countries. (From Feldman, Kloesgen, and Zilberstein 1997b.)

Figure X.42. Category map for "People," "Brokerage Houses," and "Computer Companies" with respect to "Mergers." (From Feldman, Fresko, Hirsh, et al. 1998.)

X.5.2 Citations and Notes
For a comprehensive overview of Document Explorer, see Feldman, Kloesgen, and Zilberstein (1997a, 1997b). The original Document Explorer development team included Ronen Feldman, Yonatan Aumann, David Landau, Orly Lipshtat, Amir Zilberstein, and Moshe Fresko.

XI Link Analysis
Based on the outcome of the preprocessing stage, we can establish links between entities either by using co-occurrence information (within some lexical unit such as a document, paragraph, or sentence) or by using the semantic relationships between the entities as extracted by the information extraction module (such as family relations, employment relationships, mutual service in the army, etc.). This chapter describes the link analysis techniques that can be applied to the results of the preprocessing stage (information extraction, term extraction, and text categorization).

A social network is a set of entities (e.g., people, companies, organizations, universities, countries) and a set of relationships between them (e.g., family relationships, various types of communication, business transactions, social interactions, hierarchy relationships, and shared memberships of people in organizations). Visualizing a social network as a graph enables the viewer to see patterns that were not evident before.

We begin with preliminaries from graph theory used throughout the chapter. We next describe the running example of the 9/11 hijackers' network, followed by a brief description of graph layout algorithms. After the concepts of paths and cycles in graphs are presented, the chapter proceeds with a discussion of the notion of centrality and the various ways of computing it.
Various algorithms for partitioning and clustering nodes inside the network are then presented, followed by a brief description of finding specific patterns in networks. The chapter concludes with a presentation of three low-cost software packages for performing link analysis.

XI.1 PRELIMINARIES
We model the set of entities and relationships as a graph, and most of the operations performed on those sets are modeled as operations on graphs. The following notation is used throughout the chapter. Let V = {V_1, V_2, V_3, . . . , V_n} be a set of entities extracted from the documents. A binary relation R over V is a subset of V × V.

Figure XI.1. A simple undirected network with V = {1, 2, 3, 4, 5}, R1 = {(1, 2), (1, 3), (2, 3), (3, 4), (3, 5)}, and N = (V, R1).

We use the prefix notation for relations – that is, if X and Y are related by relation R1, this is denoted by R1(X, Y). Examples of such binary relations are friendship, marriage, school mates, army mates, and so on. A network N is a tuple (V, R1, R2, R3, . . . , Rm), where each Ri (1 ≤ i ≤ m) is a binary relation over V. A visual representation of N is shown in Figure XI.1.

We can also describe a binary relation R using a binary matrix M, where M_ij = 1 if R(V_i, V_j), and 0 otherwise. For example, the matrix that represents the relation R1 shown in Figure XI.1 is as follows:

    0 1 1 0 0
    1 0 1 0 0
    1 1 0 1 1
    0 0 1 0 0
    0 0 1 0 0

Each row in the matrix corresponds to the connection vector of one of the vertices; the ith row (M_i1, . . . , M_in) is the connection vector of the ith vertex.

The set of edges connecting all vertices in an undirected graph is denoted by E, and |E| is the number of edges in the graph. If the graph is directed, then the lines that connect the vertices are called arcs. Our focus is mostly on undirected networks and hence also on undirected graphs, and so we speak of vertices and edges. The network can also have weights or values attached to each of its edges. The weight function, denoted W : E → R (the real numbers), attaches a real value to each edge. If there are no values for any of the edges, then ∀e ∈ E, W(e) = 1. If the relations R are not symmetric, then G = (V, E) is a directed graph.

A sequence of vertices (v1, v2, . . . , vk) in G is called a walk if (v_i, v_i+1) ∈ E for i = 1 . . . k − 1. A sequence of vertices (v1, v2, . . . , vk) in G is called a chain if (v_i, v_i+1) ∈ E or (v_i+1, v_i) ∈ E for i = 1 . . . k − 1. In a walk we care about the direction of each edge, whereas in a chain we do not. A path is a walk in which no vertices, except maybe the initial and terminal ones, are repeated. A walk is simple if all its edges are different. A cycle is a simple path of at least three vertices where v1 = vk. The length of the path (v1, v2, . . . , vk) is k − 1.

A special type of network is a two-mode network. This network contains two types of vertices, and there are edges that connect the two sets of vertices. A classic example would be a set of people and a set of events as vertices, with edges connecting a person vertex to an event vertex if the person participated in the event.

If there are no self-loops in the network (i.e., a vertex cannot connect to itself), then the maximal number of edges in an undirected network with n vertices is n(n − 1)/2. Such a network, in which each vertex is connected to every other vertex, is also called a clique. If the number of edges is roughly the same as the number of vertices, we say that the network is sparse, whereas if the network is close to being a clique we say that it is dense. We can quantify the density level of a given undirected network by using the following formula:

    ND (network density) = |E| / (n(n − 1)/2) = 2|E| / (n(n − 1)).

Clearly 0 ≤ ND ≤ 1. Similarly, ND for a directed network would be |E| / (n(n − 1)). For example, ND for the network of Figure XI.1 is (2 · 5)/(5 · 4) = 0.5. (These definitions are illustrated in the short sketch below.)
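The following is a short sketch of the notation above, using the network of Figure XI.1: it builds the binary matrix M for relation R1 and computes the network density ND.

```python
def adjacency_matrix(n, relation):
    """M[i][j] = 1 if R(V_i, V_j); symmetric closure for an undirected network."""
    M = [[0] * n for _ in range(n)]
    for i, j in relation:
        M[i - 1][j - 1] = M[j - 1][i - 1] = 1
    return M

def density(n, relation):
    """ND = 2|E| / (n(n - 1)) for an undirected network without self-loops."""
    edges = {frozenset(pair) for pair in relation}
    return 2 * len(edges) / (n * (n - 1))

if __name__ == "__main__":
    R1 = [(1, 2), (1, 3), (2, 3), (3, 4), (3, 5)]   # the relation of Figure XI.1
    for row in adjacency_matrix(5, R1):
        print(row)
    print("ND =", density(5, R1))   # -> 0.5
```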
If the number of edges is roughly the same as the number of vertices, we say that the network is sparse, whereas if the network is close to being a clique we say that it is dense. We can quantify the density level of a given undirected network by using the following formula:
\[
ND \ (\text{network density}) = \frac{|E|}{\,n(n-1)/2\,} = \frac{2|E|}{n(n-1)}
\]
Clearly, 0 ≤ ND ≤ 1. Similarly, ND for a directed network would be
\[
ND = \frac{|E|}{n(n-1)}.
\]
For example, ND for the network of Figure XI.1 is $\frac{2 \cdot 5}{5 \cdot 4} = 0.5$.

XI.1.1 Running Example: 9/11 Hijackers
We have collected information about the 19 9/11 hijackers from the following sources:
1. Names of the 19 hijackers, and the flights they boarded, were taken from the FBI site (see Table XI.1).
2. Prior connections between the hijackers are based on information collected from the Washington Post site given below. If there was a connection between n ≥ 2 people, it was converted to C(n, 2) symmetric binary relations between each pair of people. < nation/graphics/attack/investigation 24.html.>

The undirected graph of binary relations between the hijackers is shown in Figure XI.2. The graph was drawn using Pajek, a dedicated freeware link analysis package (Batagelj and Mrvar 2003). More details on Pajek are presented in Section XI.7.1.

Figure XI.2. Connections between the 9/11 hijackers.

The 19 hijackers boarded 4 flights, and in Table XI.1 we can see the names of the hijackers who boarded each flight. The flight information is used when we discuss the various clustering schemes of the hijackers.

Table XI.1. The 19 Hijackers Ordered by Flights
Flight 77 (Pentagon): Khalid Al-Midhar, Majed Moqed, Nawaq Alhamzi, Salem Alhamzi, Hani Hanjour
Flight 11 (WTC 1): Satam Al Suqami, Waleed M. Alshehri, Wail Alshehri, Mohamed Atta, Abdulaziz Alomari
Flight 175 (WTC 2): Marwan Al-Shehhi, Fayez Ahmed, Ahmed Alghamdi, Hamza Alghamdi, Mohald Alshehri
Flight 93 (PA): Saeed Alghamdi, Ahmed Alhaznawi, Ahmed Alnami, Ziad Jarrahi
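A short sketch of the construction described in item 2 above, together with the density formula (the group names are hypothetical; this is our illustration, not code from the original study):

```python
from itertools import combinations

# A reported connection among a group of n people is expanded into
# C(n, 2) symmetric binary relations, one per unordered pair.
group = ["A", "B", "C", "D"]
edges = set(combinations(sorted(group), 2))   # C(4, 2) = 6 pairwise edges

# Density of the resulting undirected network: ND = 2|E| / (n(n - 1)).
n = len(group)
nd = 2 * len(edges) / (n * (n - 1))
print(len(edges), nd)   # 6 1.0 -- a fully expanded group forms a clique
```

Note that expanding every reported group this way is what makes cliques so prominent in the hijackers' graph: any group of n people yields a fully connected subgraph with density 1.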
XI.2 AUTOMATIC LAYOUT OF NETWORKS
To display large networks on the screen, we need to use automatic layout algorithms. These algorithms display the graphs in an aesthetic way without any user intervention. The most commonly used aesthetic objectives are to expose symmetries and to make the drawing as compact as possible or, alternatively, to fill the space available for the drawing. Many of the "higher level" aesthetic criteria are implicit consequences of the
■ minimized number of edge crossings,
■ evenly distributed edge length,
■ evenly distributed vertex positions on the graph area,
■ sufficiently large vertex-edge distances, and
■ sufficiently large angular resolution between edges.

XI.2.1 Force-Directed Graph Layout Algorithms
Force-directed or spring-based algorithms are among the most common automatic network layout strategies. These algorithms treat the collection of vertices and edges as a system of forces and the layout as an "equilibrium state" of the system. The edges between vertices are represented as an attractive force (each edge is simulated by a spring that pulls the vertices together), whereas distinct vertices are pushed apart by some constraint to help prevent them from being drawn at the same point. The method seeks equilibrium of these contradicting constraints. The first such algorithm was introduced by Eades (Eades 1984). Following Eades, two additional layout algorithms were introduced by Kamada and Kawai (KK) (Kamada and Kawai 1989) and Fruchterman and Reingold (FR) (Fruchterman and Reingold 1991).

Kamada and Kawai's (KK) Method
Utilizing Hooke's law, Kamada and Kawai modeled a graph as a system of springs. Every two vertices are connected by a spring whose rest length is proportional to the graph-theoretic distance between its two endpoints. Each spring's stiffness is inversely proportional to the square of its rest length. The optimization algorithm used by the KK method tries to minimize the total energy of the system and achieves faster convergence by calculating the derivatives of the force equations. One of the main benefits of the KK method is that it can be used for drawing weighted graphs if the edge lengths are proportional to their weights. The KK method proceeds by moving a single vertex at a time, choosing the "most promising" vertex – that is, the one with the maximum gradient value. In Figure XI.3 we can see the graph shown in Figure XI.2 drawn by using the KK layout. Unlike the circular drawing of Figure XI.2, in which it is hard to see who the leaders of the groups are, we can see here that the main leaders are Mohamed Atta, Abdulaziz Alomari, and Hamza Alghamdi.

Figure XI.3. KK layout of the hijackers' graph.

Fruchterman–Reingold (FR) Method
This method utilizes a simple heuristic approach to force-directed layout that works surprisingly well in practice. The underlying physical model roughly corresponds to electrostatic attraction in which the attractive force between connected vertices is balanced by a repulsive force between all vertices. The basic idea is just to calculate the attractive and repulsive forces at each vertex independently and to update all vertices iteratively. As in simulated annealing, the maximum displacement of each vertex in any iteration is limited by a constant that is slightly decreased with each iteration. In Figure XI.4 we can see the graph shown in Figure XI.2 drawn by using the FR layout.

Figure XI.4. FR layout of the hijackers' graph.

For both KK and FR, the relations between vertices must be expressed as distances between the vertices. For both algorithms we need to build a "dissimilarity" matrix. In the KK algorithm this matrix is constructed from geodesic distances between vertices, whereas in the FR algorithm the matrix is constructed directly from adjacencies between the vertices. Spring-based methods are very successful with small-sized graphs of up to around 100 vertices. Simulated annealing has also been successfully applied to the layout of general undirected graphs (Davidson and Harel 1996).
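The following is a compact sketch of the FR scheme just described, written by us under the stated model (spring-like attraction along edges, pairwise repulsion, a slowly cooling displacement cap); it is not the Pajek implementation, and the constants are illustrative:

```python
import math, random

def fruchterman_reingold(vertices, edges, width=1.0, height=1.0, iters=50):
    """A minimal sketch of the FR force-directed layout."""
    k = math.sqrt(width * height / len(vertices))   # ideal edge length
    pos = {v: [random.uniform(0, width), random.uniform(0, height)]
           for v in vertices}
    temp = width / 10                                # max displacement cap
    for _ in range(iters):
        disp = {v: [0.0, 0.0] for v in vertices}
        for v in vertices:                           # repulsion: all pairs
            for u in vertices:
                if u == v:
                    continue
                dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
                d = max(math.hypot(dx, dy), 1e-9)
                f = k * k / d                        # repulsive force
                disp[v][0] += dx / d * f
                disp[v][1] += dy / d * f
        for (v, u) in edges:                         # attraction along edges
            dx, dy = pos[v][0] - pos[u][0], pos[v][1] - pos[u][1]
            d = max(math.hypot(dx, dy), 1e-9)
            f = d * d / k                            # attractive force
            disp[v][0] -= dx / d * f; disp[u][0] += dx / d * f
            disp[v][1] -= dy / d * f; disp[u][1] += dy / d * f
        for v in vertices:                           # move, capped by temperature
            d = max(math.hypot(disp[v][0], disp[v][1]), 1e-9)
            pos[v][0] += disp[v][0] / d * min(d, temp)
            pos[v][1] += disp[v][1] / d * min(d, temp)
        temp *= 0.95                                 # cool down each iteration
    return pos

print(fruchterman_reingold([1, 2, 3, 4, 5],
                           [(1, 2), (1, 3), (2, 3), (3, 4), (3, 5)]))
```

On the small Figure XI.1 network, a few dozen iterations already separate the triangle {1, 2, 3} from the pendant vertices 4 and 5, illustrating why spring-based methods work well at this scale.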
Although force-directed methods are quite useful in automatically exposing most of the symmetries of the given graphs, they share several disadvantages:
■ They are computationally expensive, and hence minimizing the energy function when dealing with large graphs is computationally prohibitive.
■ Because all methods rely on heuristics, there is no guarantee that the "best" layout will be found.
■ The methods behave as black boxes, and thus it is almost impossible to integrate additional constraints on the layout (such as fixing the positions of certain vertices or specifying the relative ordering of the vertices).
■ Even when the graphs are planar, it is quite possible that we will obtain edge crossings.
■ The methods try to optimize just the placement of vertices and edges while ignoring the exact shape of the vertices or the possibility that the vertices have labels (and hence the labels, vertices, or both may overlap each other).

XI.2.2 Drawing Large Graphs
A fast algorithm for drawing general graphs with straight edges was proposed by Harel and Koren (2000) based on the work of Hadany and Harel (2001). Their algorithm works by producing a sequence of improved approximations of the final layout. Each approximation allows vertices to deviate from their final place by an extent limited by a decreasing constant r. As a result, the layout can be computed using increasingly coarse representations of the graph in which closely drawn vertices are collapsed into a single vertex. Each layout in the sequence is generated very rapidly by performing a local beautification on the previously generated layout.
The main idea of Hadany and Harel's work is to consider a series of abstractions of the graph called coarse graphs in which the combinatorial structure is significantly simplified but important topological features are well preserved. The energy minimization is divided between these coarse graphs in such a way that globally related properties are optimized on coarser graphs, whereas locally related properties are optimized on finer graphs. As a result, the energy minimization process considers only small neighborhoods at once, yielding a quick running time.

XI.3 PATHS AND CYCLES IN GRAPHS
Given two vertices in a directed graph, we can compute the shortest path between them. The diameter of a graph is defined as the length of the longest shortest path between any two vertices in the graph. Albert et al. (Albert, Jeong, and Barabasi 1999) found that, when the Web contained around 8 × 10^8 documents, the average shortest path between any two pages was 19. The interpretation of the shortest path in this case is the smallest number of URL links that must be followed to navigate from one Web page to the other.
There are many kinds of paths between entities that can be traced in a dataset. In the Kevin Bacon game, for example, a player takes any actor and finds a path between the actor and Kevin Bacon that has fewer than six edges. For instance, Kevin Costner links to Kevin Bacon by using one direct link: Both were in JFK. Julia Louis-Dreyfus of TV's Seinfeld, however, needs two links to make a path: Julia Louis-Dreyfus was in Christmas Vacation (1989) with Keith MacKechnie. Keith MacKechnie was in We Married Margo (2000) with Kevin Bacon. You can play the game by using the following URL: .
A similar idea is also used in the mathematical community and is called the Erdős number of a researcher. Paul Erdős (1913–1996) wrote hundreds of mathematical research papers in many different areas – many in collaboration with others. There is a link between any two mathematicians if they coauthored a paper. Paul Erdős is the root of the mathematical research network, and his Erdős number is 0. Erdős's coauthors have Erdős number 1. People other than Erdős who have written a joint paper with someone with Erdős number 1 but not with Erdős have Erdős number 2, and so on.
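Both the Kevin Bacon game and Erdős numbers reduce to a single-source shortest path computation on an unweighted graph, which breadth-first search solves directly. A minimal sketch (ours; the toy coauthorship data below is hypothetical):

```python
from collections import deque

def geodesic_distances(adj, root):
    """Breadth-first search from `root`: returns the geodesic (shortest
    path) distance to every reachable vertex. With a coauthorship graph
    and Erdos as the root, these distances are exactly Erdos numbers."""
    dist = {root: 0}
    queue = deque([root])
    while queue:
        v = queue.popleft()
        for u in adj[v]:
            if u not in dist:           # first visit = shortest distance
                dist[u] = dist[v] + 1
                queue.append(u)
    return dist

# Toy coauthorship network (hypothetical names).
adj = {"erdos": ["a", "b"], "a": ["erdos", "c"], "b": ["erdos"], "c": ["a"]}
print(geodesic_distances(adj, "erdos"))   # {'erdos': 0, 'a': 1, 'b': 1, 'c': 2}
```

Running this search from every vertex and taking the largest value found gives the diameter of the graph defined above.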
In Figure XI.5 we can see the split of the hijackers into five levels according to their distance from Mohamed Atta. The size of the little circle associated with each hijacker manifests the proximity of the hijacker to Atta; the larger the circle, the shorter the geodesic (the shortest path between two vertices in the graph) between the hijacker and Atta. There are ten hijackers who have a geodesic of size 1, four hijackers who have a geodesic of size 2, one hijacker who has a geodesic of size 3, one hijacker who has a geodesic of size 4, and finally two hijackers who have a geodesic of size 5.

Figure XI.5. Computing the shortest distance between Atta and all other 18 hijackers.

A much better visualization of the different degree levels can be seen in Figure XI.6. The diagram was produced by using Pajek's drawing module and selecting Layers | in y direction. The various levels are coded by the distance from the nodes with the highest degree. Connections are shown just between entities of different levels.

Figure XI.6. Layered display of the geodesic distance between Atta and the other hijackers.

XI.4 CENTRALITY
The notion of centrality enables us to identify the main and most powerful actors within a social network. Those actors should get special attention when monitoring the behavior of the network. Centrality is a structural attribute of vertices in a network; it has nothing to do with the features of the actual objects represented by the vertices of the network (i.e., if it is a network of people, their nationality, title, or any physical feature). When dealing with directed networks we use the term prestige. There are two types of prestige: the one defined on outgoing arcs is called influence, whereas the one defined on incoming arcs is called support. Because most of our networks are based on co-occurrence of entities in the same lexical unit, we will confine our attention to undirected networks and use the term centrality. The different measures of centrality we will present can be adapted easily for directed networks to measure influence or support.
Five major definitions are used for centrality: degree centrality, closeness centrality, betweenness centrality, eigenvector centrality, and power centrality. We discuss these in the next several sections.

XI.4.1 Degree Centrality
If the graph is undirected, then the degree of a vertex v ∈ V is the number of other vertices that are directly connected to it.

Definition: degree(v) = |{(v1, v2) ∈ E | v1 = v or v2 = v}|

If the graph is directed, then we can talk about in-degree or out-degree. An edge (v1, v2) ∈ E in the directed graph is leading from vertex v1 to v2:

In-degree(v) = |{(v1, v) ∈ E}|
Out-degree(v) = |{(v, v2) ∈ E}|

If the graph represents a social network, then clearly people who have more connections to other people can be more influential and can utilize more of the resources of the network as a whole. Such people are often mediators and dealmakers in exchanges among others and are able to benefit from this brokerage. When dealing with undirected connections, people differ from one another only in how many connections they have. In contrast, when the connections are directed, it is important to distinguish centrality based on in-degree from centrality based on out-degree. If a person has a high in-degree, we say that this person is prominent and has high prestige; many people seek direct connections to him or her, indicating that person's importance. People who have a high out-degree are able to interact with many others and possibly spread their ideas. Such people are said to be influential. In Table XI.2, we can see the hijackers sorted in decreasing order of their (undirected) degree measures. We can see that Mohamed Atta and Abdulaziz Alomari have the highest degree.

Table XI.2. All Degree Measures of the Hijackers
Name  Degree
Mohamed Atta  11
Abdulaziz Alomari  11
Ziad Jarrahi  9
Fayez Ahmed  8
Waleed M. Alshehri  7
Wail Alshehri  7
Satam Al Suqami  7
Salem Alhamzi  7
Marwan Al-Shehhi  7
Majed Moqed  7
Khalid Al-Midhar  6
Hani Hanjour  6
Nawaq Alhamzi  5
Ahmed Alghamdi  5
Saeed Alghamdi  3
Mohald Alshehri  3
Hamza Alghamdi  3
Ahmed Alnami  1
Ahmed Alhaznawi  1
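The three degree definitions above translate directly into code. A minimal sketch (ours), using the Figure XI.1 network rather than the full hijacker data:

```python
def degree(v, edges):
    """degree(v) = |{(v1, v2) in E : v1 = v or v2 = v}| (undirected)."""
    return sum(1 for (a, b) in edges if v in (a, b))

def in_degree(v, arcs):
    """Number of arcs ending at v in a directed graph."""
    return sum(1 for (_, b) in arcs if b == v)

def out_degree(v, arcs):
    """Number of arcs starting at v in a directed graph."""
    return sum(1 for (a, _) in arcs if a == v)

edges = [(1, 2), (1, 3), (2, 3), (3, 4), (3, 5)]   # network of Figure XI.1
print(sorted(((degree(v, edges), v) for v in range(1, 6)), reverse=True))
# [(4, 3), (2, 2), (2, 1), (1, 5), (1, 4)] -- vertex 3 is the most connected
```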
XI.4.2 Closeness Centrality
Degree centrality measures might be criticized because they take into account only the direct connections that an entity has rather than indirect connections to all other entities. One entity might be directly connected to a large number of entities that might be rather isolated from the network. Such an entity is central only in a local neighborhood of the network.
To address the shortcomings of the degree measure, we can utilize closeness centrality. This measure is based on the calculation of the geodesic distance between the entity and all other entities in the network. We can use either directed or undirected geodesic distances between the entities. In our current example, we have decided to look at undirected connections. The sum of these geodesic distances for each entity is the "farness" of the entity from all other entities. We can convert this into a measure of closeness centrality by taking its reciprocal. We can normalize the closeness measure by dividing it by the closeness measure of the most central entity.
Formally, let d(v1, v2) be the minimal distance between v1 and v2 – that is, the minimal number of vertices we need to pass on the way from v1 to v2. The closeness centrality of vertex vi is defined as
\[
C_i = \frac{|V| - 1}{\sum_{j \neq i} d(v_i, v_j)}.
\]
This is the reciprocal of the average geodesic distance between vi and any other vertex in the network. In Table XI.3, we can see the hijackers sorted in decreasing order of their closeness.

Table XI.3. Closeness Measures of the Hijackers
Name  Closeness
Abdulaziz Alomari  0.6
Ahmed Alghamdi  0.5454545
Ziad Jarrahi  0.5294118
Fayez Ahmed  0.5294118
Mohamed Atta  0.5142857
Majed Moqed  0.5142857
Salem Alhamzi  0.5142857
Hani Hanjour  0.5
Marwan Al-Shehhi  0.4615385
Satam Al Suqami  0.4615385
Waleed M. Alshehri  0.4615385
Wail Alshehri  0.4615385
Hamza Alghamdi  0.45
Khalid Al-Midhar  0.4390244
Mohald Alshehri  0.4390244
Nawaq Alhamzi  0.3673469
Saeed Alghamdi  0.3396226
Ahmed Alnami  0.2571429
Ahmed Alhaznawi  0.2571429
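Closeness combines the breadth-first search from Section XI.3 with the formula above. A minimal sketch (ours), assuming a connected graph so that all distances are finite:

```python
from collections import deque

def closeness(adj, v):
    """C_i = (|V| - 1) / sum of geodesic distances from v to all others,
    with the distances computed by breadth-first search."""
    dist = {v: 0}
    queue = deque([v])
    while queue:
        x = queue.popleft()
        for y in adj[x]:
            if y not in dist:
                dist[y] = dist[x] + 1
                queue.append(y)
    return (len(adj) - 1) / sum(dist.values())

adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4, 5], 4: [3], 5: [3]}
for v in adj:
    print(v, round(closeness(adj, v), 3))   # vertex 3 scores 1.0, the maximum
```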
XI.4.3 Betweenness Centrality
Betweenness centrality measures the effectiveness with which a vertex connects the various parts of the network. Entities that are on many geodesic paths between other pairs of entities are more powerful because they control the flow of information between the pairs. That is, the more other entities depend on a certain entity to make connections, the more power this entity has. If, however, two entities are connected by more than one geodesic path and a given entity is not on all of them, it loses some power. If we add up, for each entity, the proportion of times this entity is "between" other entities for transmission of information, we obtain the betweenness centrality of that entity. We can normalize this measure by dividing it by the maximum possible betweenness that an entity could have had, which is the number of possible pairs of entities for which the entity is on every geodesic between them, $\frac{(|V|-1)(|V|-2)}{2}$.
Formally, let $g_{jk}$ be the number of geodesic paths that connect $v_j$ with $v_k$, and $g_{jk}(v_i)$ the number of geodesic paths that connect $v_j$ with $v_k$ and pass via $v_i$. Then
\[
B_i = \sum_{j<k} \frac{g_{jk}(v_i)}{g_{jk}}, \qquad NB_i = \frac{2 B_i}{(|V|-1)(|V|-2)}.
\]
In Table XI.4, we can see the hijackers sorted in decreasing order of their betweenness measures.

Table XI.4. Betweenness Measures of the Hijackers
Name  Betweenness (Bi)
Hamza Alghamdi  0.3059446
Saeed Alghamdi  0.2156863
Ahmed Alghamdi  0.210084
Abdulaziz Alomari  0.1848669
Mohald Alshehri  0.1350763
Mohamed Atta  0.1224783
Ziad Jarrahi  0.0807656
Fayez Ahmed  0.0686275
Majed Moqed  0.0483901
Salem Alhamzi  0.0483901
Hani Hanjour  0.0317955
Khalid Al-Midhar  0.0184832
Nawaq Alhamzi  0
Marwan Al-Shehhi  0
Satam Al Suqami  0
Waleed M. Alshehri  0
Wail Alshehri  0
Ahmed Alnami  0
Ahmed Alhaznawi  0

XI.4.4 Eigenvector Centrality
The main idea behind eigenvector centrality is that entities receiving many communications from other well-connected entities will be better and more valuable sources of information and hence be considered central. The eigenvector centrality scores correspond to the values of the principal eigenvector of the adjacency matrix M. Formally, the vector v satisfies the equation $\lambda v = M v$, where $\lambda$ is the corresponding eigenvalue and M is the adjacency matrix.
The score of each vertex is proportional to the sum of the centralities of the neighboring vertices. Intuitively, vertices with high eigenvector centrality scores are connected to many other vertices with high scores, which are, in turn, connected to many other vertices, and this continues recursively. Clearly, the highest score will be obtained by vertices that are members of large cliques or large p-cliques. In Table XI.5 we can see that the members of the big clique (with eight members) are those that got the highest scores. Atta and Al-Shehhi got much higher scores than all the other hijackers mainly because the connection between them is so strong. They were also the pilots of the planes going into WTC 1 and WTC 2 and are believed to have been the leaders of the hijackers.

Table XI.5. Eigenvector Centrality Scores of the Hijackers
Name  E1
Mohamed Atta  0.518
Marwan Al-Shehhi  0.489
Abdulaziz Alomari  0.296
Ziad Jarrahi  0.246
Fayez Ahmed  0.246
Satam Al Suqami  0.241
Waleed M. Alshehri  0.241
Wail Alshehri  0.241
Salem Alhamzi  0.179
Majed Moqed  0.165
Hani Hanjour  0.151
Khalid Al-Midhar  0.114
Ahmed Alghamdi  0.085
Nawaq Alhamzi  0.064
Mohald Alshehri  0.054
Hamza Alghamdi  0.015
Saeed Alghamdi  0.002
Ahmed Alnami  0
Ahmed Alhaznawi  0
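The principal eigenvector of $\lambda v = M v$ can be approximated by simple power iteration: repeatedly multiply a start vector by M and renormalize. A minimal sketch (ours; production tools typically use more robust eigensolvers):

```python
def eigenvector_centrality(M, iters=100):
    """Power iteration on the adjacency matrix M; converges toward the
    principal eigenvector, whose entries are the centrality scores."""
    n = len(M)
    v = [1.0] * n
    for _ in range(iters):
        w = [sum(M[i][j] * v[j] for j in range(n)) for i in range(n)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

M = [[0, 1, 1, 0, 0],     # adjacency matrix of the Figure XI.1 network
     [1, 0, 1, 0, 0],
     [1, 1, 0, 1, 1],
     [0, 0, 1, 0, 0],
     [0, 0, 1, 0, 0]]
print([round(x, 3) for x in eigenvector_centrality(M)])
# vertex 3 gets the top score; its neighbors inherit part of that weight
```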
XI.4.5 Power Centrality
Power centrality was introduced by Bonacich. Given an adjacency matrix M, the power centrality of vertex i (denoted $c_i$) is given by
\[
c_i = \sum_{j \neq i} M_{ij} (\alpha + \beta \cdot c_j),
\]
where α is used to normalize the score (the normalization parameter is automatically selected so that the sum of squares of the vertices' centralities is equal to the number of vertices in the network) and β is an attenuation factor that controls the effect that the power centralities of the neighboring vertices should have on the power centrality of the vertex.
As in eigenvector centrality, the power centrality of each vertex is determined by the centrality of the vertices it is connected to. By specifying positive or negative values of β, the user can control whether a vertex's being connected to powerful vertices should have a positive or a negative effect on its score. The rationale for specifying a positive β is that, if you are connected to powerful colleagues, it makes you more powerful. On the other hand, the rationale for a negative β is that powerful colleagues have many connections and hence are not controlled by you, whereas isolated colleagues have no other sources of information and hence are largely controlled by you. In Table XI.6, we can see the hijackers sorted in decreasing order of their power measure.

Table XI.6. Power Centrality for the Hijackers Graph
Name  Power: β = 0.99  Power: β = −0.99
Mohamed Atta  2.254  2.214
Marwan Al-Shehhi  2.121  0.969
Abdulaziz Alomari  1.296  1.494
Ziad Jarrahi  1.07  1.087
Fayez Ahmed  1.07  1.087
Satam Al Suqami  1.047  0.861
Waleed M. Alshehri  1.047  0.861
Wail Alshehri  1.047  0.861
Salem Alhamzi  0.795  1.153
Majed Moqed  0.73  1.029
Hani Hanjour  0.673  1.334
Khalid Al-Midhar  0.503  0.596
Ahmed Alghamdi  0.38  0.672
Nawaq Alhamzi  0.288  0.574
Mohald Alshehri  0.236  0.467
Hamza Alghamdi  0.07  0.566
Saeed Alghamdi  0.012  0.656
Ahmed Alnami  0.003  0.183
Ahmed Alhaznawi  0.003  0.183

XI.4.6 Network Centralization
In addition to the individual vertex centralization measures, we can assign a number between 0 and 1 that will signal the whole network's level of centralization. The network centralization measures are computed based on the centralization values of the network's vertices; hence, for each type of individual centralization measure we will have an associated network centralization measure. A network structured like a circle will have a network centralization value of 0 (because all vertices have the same centralization value), whereas a network structured like a star will have a network centralization value of 1. We now provide some of the formulas for the different network centralization measures.

Degree
\[
\mathrm{Degree}^*(V) = \max_{v \in V} \mathrm{Degree}(v), \qquad
NET_{\mathrm{Degree}} = \frac{\sum_{v \in V} \big(\mathrm{Degree}^*(V) - \mathrm{Degree}(v)\big)}{(n-1)(n-2)}
\]
Clearly, if we have a circle, all vertices have a degree of 2; hence, $NET_{\mathrm{Degree}} = 0$. If we have a star of n nodes (one node in the middle), then that node will have a degree of n − 1, and all other nodes will have a degree of 1; hence,
\[
NET_{\mathrm{Degree}} = \frac{\sum_{v \in V \setminus \{v^*\}} \big((n-1) - 1\big)}{(n-1)(n-2)} = \frac{(n-1)(n-2)}{(n-1)(n-2)} = 1.
\]
For the hijackers' graph, $NET_{\mathrm{Degree}} = 0.31$.
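The degree-based centralization formula is a one-liner once the degree sequence is known; a minimal sketch (ours) that reproduces the circle and star extremes:

```python
def net_degree(degrees):
    """NET_Degree = sum(max_degree - deg(v)) / ((n - 1)(n - 2)),
    the degree-based network centralization measure defined above."""
    n = len(degrees)
    dmax = max(degrees)
    return sum(dmax - d for d in degrees) / ((n - 1) * (n - 2))

print(net_degree([2, 2, 2, 2, 2]))   # circle of 5 vertices -> 0.0
print(net_degree([4, 1, 1, 1, 1]))   # star of 5 vertices   -> 1.0
```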
Betweenness
\[
NB^*(V) = \max_{v \in V} NB(v), \qquad
NET_{Bet} = \frac{\sum_{v \in V} \big(NB^*(V) - NB(v)\big)}{n-1}
\]
For the hijackers' network, $NET_{Bet} = 0.24$.

XI.4.7 Summary Diagram
Figure XI.7 presents a summary diagram of the different centrality measures as they are applied to the hijackers' network. We marked by solid arrows the hijackers who got the highest value for the various centrality measures and by dashed arrows the runners-up. We can see, for instance, that Atta has the highest value for degree centrality, eigenvector centrality, and power centrality, whereas Alomari has the highest value for degree centrality (tied with Atta) and closeness centrality and is the runner-up for power centrality (with a negative beta).

Figure XI.7. Summary diagram of centrality measures (solid arrows point to the highest value; dashed arrows point to the second largest; done using NetMiner (Cyram 2004)).

On the basis of our experience, the most important centrality measures are power and eigenvector (which are typically in agreement). Closeness and, even more so, betweenness centrality signal the people who are crucial in securing fast communication between the different parts of the network.

XI.5 PARTITIONING OF NETWORKS
Often we obtain networks that contain hundreds and even thousands of vertices. To analyze the network effectively it is crucial to partition it into smaller subgraphs. We present three methods below for taking a network and partitioning it into clusters. The first method is based on core computation, the second on classic graph algorithms for finding strong and weak components and biconnected components, and the third on block modeling.

XI.5.1 Cores
Definition: Let G = (V, E) be a graph. A subgraph S = (W, E|W) induced by the vertex set W is a k-core, or a core of order k, iff ∀n ∈ W: deg_S(n) ≥ k, and S is maximal with respect to this property. The main core is the core of highest order. The core number of vertex n is the highest order of a core that contains this vertex.

Algorithm for finding the main core: Given a graph G = (V, E), delete all vertices n (and the edges attached to them) such that deg_S(n) < k, and repeat until no vertices or edges can be deleted. The subgraph that remains after the iterative deletion is a core of order k. If an empty graph results, we know that no core of order k exists. We can perform a simple binary search over k (taking on the order of log |V| steps) for the order of the main core. After the main core is discovered, we can remove its vertices and the associated edges from the graph and search again for the next core in the reduced graph. The process will terminate when an empty graph is reached. In Figure XI.8, we can see the cores that were discovered in the hijackers' graph.

Figure XI.8. Core partitioning of the hijackers' graph.
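The iterative-deletion step of the algorithm above is short enough to show in full. A minimal sketch (ours); note that deleting a vertex also scrubs it from its neighbors' adjacency sets, so degrees shrink as the loop proceeds:

```python
def k_core(adj, k):
    """Iteratively delete vertices of degree < k; what remains,
    if anything, is the k-core of the graph."""
    core = {v: set(ns) for v, ns in adj.items()}
    changed = True
    while changed:
        changed = False
        for v in [v for v, ns in core.items() if len(ns) < k]:
            for u in core[v]:
                core[u].discard(v)       # remove v from its neighbors' sets
            del core[v]                  # remove v itself
            changed = True
    return core

adj = {1: {2, 3}, 2: {1, 3}, 3: {1, 2, 4, 5}, 4: {3}, 5: {3}}
print(sorted(k_core(adj, 2)))            # [1, 2, 3] -- the triangle is the 2-core
print(sorted(k_core(adj, 3)))            # []        -- no 3-core exists
```

Trying successively larger values of k until the result is empty yields the order of the main core.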
When a core was discovered, it was deleted from the graph, and the search for the biggest core in the remaining graph started again. We can see that four cores were found. The main core contains eight nodes and is of order seven (each vertex is connected to all other seven vertices), the second largest core has six vertices in it and an order of 3, the third core has three vertices and an order of 1, and the fourth one has two vertices and an order of 1.
We then used the shrinking option of Pajek (Operations | Shrink Network | Partition) to obtain a schematic view of the network based on the core partition. Each core is reduced to the name of its first member. For instance, the first member in the core marked 1 is Mohamed Atta, and hence the core is reduced to him. If there is at least one edge between the vertices of any two cores, then we will have an edge between the associated vertices in the reduced graph. The reduced graph, which is based on the shrinking of the core partitioning, is shown in Figure XI.9. A layered display of the cores is shown in Figure XI.10.

Figure XI.9. Shrinking the hijackers' graph based on the core partition.

Alternatively, we can use a layered display of the network to see the different cores and the relations between them better. Each core is shown at a different y-level. This representation mainly enables us to focus on the interconnections between the cores.

Figure XI.10. Layered display of the cores.

XI.5.2 Classic Graph Analysis Algorithms
Another way of partitioning a network is to use classic graph algorithms such as weak and strong component analysis and the identification of biconnected components.

Strong and Weak Components
Whether the network is directed or undirected is crucial to the component analysis of the network. A subset of vertices is called a strongly connected component if there is at least one walk from any vertex to any other vertex in the subset. A subset of vertices is called a weakly connected component if there exists at least one chain from any vertex to any other vertex in the subset. A subset of vertices is called a biconnected component if there exist at least two chains from any vertex to any other vertex in the subset, where the chains share no common vertex.

Biconnected Components and Articulation Points
A vertex d of the network is an articulation point of the network if there exist two additional vertices b and c such that every chain between b and c also includes d. It follows that vertex d is an articulation point if the removal of d from the network disconnects it. A network is termed biconnected if, for every triple of vertices d, b, and c, there is a chain between b and c that does not include d. This means that a biconnected network remains connected even after any vertex is removed from it. There are no articulation points in a biconnected network. Articulation points expose weaknesses of networks, and the elimination of an articulation point causes the network to become fragmented. The articulation points of the hijackers' graph are shown in Figure XI.11.

Figure XI.11. Articulation points of the hijackers' network (the number above the arrow signals the number of components that will result after removing the articulation point).
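Articulation points can be found in a single depth-first search using the classic discovery-time/low-point technique. A minimal sketch (ours) for small graphs; an iterative version would be needed for very deep graphs because of Python's recursion limit:

```python
def articulation_points(adj):
    """Depth-first search computation of the articulation points of an
    undirected graph, using discovery times and low points."""
    disc, low, points = {}, {}, set()
    counter = [0]

    def dfs(v, parent):
        disc[v] = low[v] = counter[0]
        counter[0] += 1
        children = 0
        for u in adj[v]:
            if u not in disc:
                children += 1
                dfs(u, v)
                low[v] = min(low[v], low[u])
                # a non-root v separates u's subtree if no back edge
                # from that subtree climbs above v
                if parent is not None and low[u] >= disc[v]:
                    points.add(v)
            elif u != parent:
                low[v] = min(low[v], disc[u])
        if parent is None and children > 1:   # root rule: two or more subtrees
            points.add(v)

    for v in adj:
        if v not in disc:
            dfs(v, None)
    return points

adj = {1: [2, 3], 2: [1, 3], 3: [1, 2, 4, 5], 4: [3], 5: [3]}
print(articulation_points(adj))   # {3} -- removing vertex 3 disconnects the graph
```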
XI.5.3 Equivalence between Entities
Given a network of entities, we are often interested in measuring the similarity between the entities based on their interaction with other entities in the network. This section formalizes this notion of similarity between entities and provides examples of how to find similar entities and how to use the similarity measure to cluster the entities.

Structural Equivalence
Two entities are said to be exactly structurally equivalent if they have the same relationships to all other entities. If A is "structurally equivalent" to B, then these two entities are "substitutable." Typically, we will not be able to find entities that are exactly structurally equivalent; hence, we are interested in calculating the degree of structural equivalence between entities. This measure of distance makes it possible to perform hierarchical clustering of the entities in our network.
We present two formal definitions for structural equivalence. Both are based on the connection vectors of each of the entities. The first definition is based on the Euclidean distance between the connection vectors, and the other on the number of exact matches between the elements of the vectors:
\[
EDis(V_i, V_j) = \sqrt{\sum_k (M_{ik} - M_{jk})^2}
\]
\[
Match(V_i, V_j) = \frac{\sum_{k=1}^{n} eq(M_{ik}, M_{jk})}{n},
\quad \text{where } eq(a, b) = \begin{cases} 1 & a = b \\ 0 & \text{otherwise} \end{cases}
\]

Regular Equivalence
Two entities are said to be regularly equivalent if they have an identical profile of connections with other entities that are also regularly equivalent. In order to establish regular equivalence, we need to classify the entities into semantic sets such that each set contains entities with a common role. An example would be the sets of surgeons, nurses, and anesthesiologists. Let us assume that each surgeon is related to a set of three nurses and one anesthesiologist. We say that two such surgeons are regularly equivalent (and so are the nurses and the anesthesiologists) – that is, they perform the same function in the network.
Entities that are "structurally equivalent" are also "regularly equivalent." However, entities that are "regularly equivalent" do not have to be "structurally equivalent." It is much easier to check whether two entities are structurally equivalent because there is a simple algorithm for computing EDis and Match. It is much harder to establish whether two entities are regularly equivalent because we need to create a taxonomy of semantic categories on top of the entities.
In Figure XI.12 we can see two pairs of people and one triplet that are structurally equivalent.

Figure XI.12. Structural equivalences in the hijackers' graph.

In Table XI.7 we can see the EDis computed for each pair of entities. Entities that are structurally equivalent will have an EDis of 0. For instance, Waleed M. Alshehri and Wail Alshehri are structurally equivalent, and hence their EDis is 0. Based on this table, we were able to use a hierarchical clustering algorithm (via the UCINET software package; see Section XI.7.2) and generate the dendrogram shown in Figure XI.13. People who are very close in the dendrogram are similar structurally (i.e., they have low EDis), whereas people who are far away in the dendrogram are different structurally.
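Both structural equivalence measures reduce to simple comparisons of connection vectors (the rows of the adjacency matrix). A minimal sketch (ours):

```python
def edis(mi, mj):
    """Euclidean distance between the connection vectors of two entities."""
    return sum((a - b) ** 2 for a, b in zip(mi, mj)) ** 0.5

def match(mi, mj):
    """Fraction of positions on which the two connection vectors agree."""
    return sum(1 for a, b in zip(mi, mj) if a == b) / len(mi)

# Connection vectors (rows of the adjacency matrix) for vertices 4 and 5
# of the Figure XI.1 network: both connect only to vertex 3.
m4 = [0, 0, 1, 0, 0]
m5 = [0, 0, 1, 0, 0]
print(edis(m4, m5), match(m4, m5))   # 0.0 1.0 -- structurally equivalent
```

Computing edis over all pairs of rows produces a distance table like Table XI.7 below, which a standard hierarchical clustering routine can then turn into a dendrogram like Figure XI.13.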
Figure XI.13. Clustering based on structural equivalence between the hijackers (we can see that {15, 14, 16} as well as {10, 17} and {18, 19} are structural equivalence classes).

Table XI.7. Euclidean Distance (EDis) between Each Pair of Entities (columns follow the same 1–19 ordering as the rows)
1  Nawaq Alhamzi  0.0 1.4 9.3 9.6 3.7 2.8 3.7 4.2 2.4 4.9 3.7 3.7 3.7 4.7 4.7 4.7 4.9 3.2 3.2
2  Khalid Al-Midhar  1.4 0.0 9.4 8.4 4.0 2.4 3.5 4.0 2.8 4.7 4.0 4.0 4.0 4.5 4.5 4.5 4.7 3.5 3.5
3  Mohamed Atta  9.3 9.4 0.0 2.4 9.8 9.7 10.2 7.5 9.4 7.6 9.8 9.4 9.8 7.5 7.5 7.5 7.6 9.6 9.6
4  Marwan Al-Shehhi  9.6 8.4 2.4 0.0 10.7 8.7 9.3 7.6 9.5 7.2 9.5 9.1 9.5 7.1 7.1 7.1 7.2 9.3 9.3
5  Hani Hanjour  3.7 4.0 9.8 10.7 0.0 3.2 2.0 5.3 4.0 6.8 6.0 6.3 6.3 6.6 6.6 6.6 6.8 6.0 6.0
6  Majed Moqed  2.8 2.4 9.7 8.7 3.2 0.0 1.4 4.2 3.2 5.3 4.7 5.1 5.1 5.1 5.1 5.1 5.3 4.7 4.7
7  Salem Alhamzi  3.7 3.5 10.2 9.3 2.0 1.4 0.0 4.9 4.0 6.2 5.7 6.0 6.0 6.0 6.0 6.0 6.2 5.7 5.7
8  Abdulaziz Alomari  4.2 4.0 7.5 7.6 5.3 4.2 4.9 0.0 4.0 3.2 4.9 4.5 5.3 2.8 2.8 2.8 3.2 4.9 4.9
9  Ahmed Alghamdi  2.4 2.8 9.4 9.5 4.0 3.2 4.0 4.0 0.0 4.7 3.5 3.5 3.5 4.5 4.5 4.5 4.7 3.5 3.5
10  Ziad Jarrahi  4.9 4.7 7.6 7.2 6.8 5.3 6.2 3.2 4.7 0.0 4.2 3.7 4.7 1.4 1.4 1.4 0.0 4.2 4.2
11  Hamza Alghamdi  3.7 4.0 9.8 9.5 6.0 4.7 5.7 4.9 3.5 4.2 0.0 2.8 2.8 4.5 4.5 4.5 4.2 2.0 2.0
12  Mohald Alshehri  3.7 4.0 9.4 9.1 6.3 5.1 6.0 4.5 3.5 3.7 2.8 0.0 2.8 3.5 3.5 3.5 3.7 2.8 2.8
13  Saeed Alghamdi  3.7 4.0 9.8 9.5 6.3 5.1 6.0 5.3 3.5 4.7 2.8 2.8 0.0 4.5 4.5 4.5 4.7 2.0 2.0
14  Satam Al Suqami  4.7 4.5 7.5 7.1 6.6 5.1 6.0 2.8 4.5 1.4 4.5 3.5 4.5 0.0 0.0 0.0 1.4 4.0 4.0
15  Waleed M. Alshehri  4.7 4.5 7.5 7.1 6.6 5.1 6.0 2.8 4.5 1.4 4.5 3.5 4.5 0.0 0.0 0.0 1.4 4.0 4.0
16  Wail Alshehri  4.7 4.5 7.5 7.1 6.6 5.1 6.0 2.8 4.5 1.4 4.5 3.5 4.5 0.0 0.0 0.0 1.4 4.0 4.0
17  Fayez Ahmed  4.9 4.7 7.6 7.2 6.8 5.3 6.2 3.2 4.7 0.0 4.2 3.7 4.7 1.4 1.4 1.4 0.0 4.2 4.2
18  Ahmed Alnami  3.2 3.5 9.6 9.3 6.0 4.7 5.7 4.9 3.5 4.2 2.0 2.8 2.0 4.0 4.0 4.0 4.2 0.0 0.0
19  Ahmed Alhaznawi  3.2 3.5 9.6 9.3 6.0 4.7 5.7 4.9 3.5 4.2 2.0 2.8 2.0 4.0 4.0 4.0 4.2 0.0 0.0

XI.5.4 Block Modeling
Block modeling is an analysis technique for finding clusters of vertices that behave in a similar way. Block modeling is based on the notions of structural and regular equivalence between vertices and as such is far more sensitive to the interconnections between vertices than the standard clustering techniques introduced before. Block modeling was introduced by Borgatti and Everett (1993). The technique is fairly general and can use a variety of equivalence relations between the vertices. The general block modeling problem is composed of two subproblems:
1. Performing clustering of the vertices; each cluster serves as a block.
2. Calculating the links (and their associated values) between the blocks.

Formal Notations
Given two clusters C1 and C2, L(C1, C2) is the set of edges that connect vertices in C1 to vertices in C2. Formally, L(C1, C2) = {(x, y) | (x, y) ∈ E, x ∈ C1, y ∈ C2}.
Because there are many ways to partition our vertices into clusters, we will introduce an optimization criterion that will help pick the optimal clustering scheme. Before defining the problem formally, we will introduce a few predicates on the connections between two clusters. Visualizations of some of these predicates are shown in Figure XI.14.

Null: Null(C1, C2) ≡ ∀x ∈ C1, ∀y ∈ C2: (x, y) ∉ E. No connection at all between the clusters.
Com (Complete): Com(C1, C2) ≡ ∀x ∈ C1, ∀y (y ≠ x) ∈ C2: (x, y) ∈ E. Full connection between the clusters.
Row Regular: Rreg(C1, C2) ≡ ∀x ∈ C1, ∃y ∈ C2: (x, y) ∈ E. Each vertex in the first cluster is connected to at least one vertex in the second cluster.
Column Regular: Creg(C1, C2) ≡ ∀y ∈ C2, ∃x ∈ C1: (x, y) ∈ E. Each vertex in the second cluster is connected to at least one vertex in the first cluster.
Regular: Reg(C1, C2) ≡ Rreg(C1, C2) ∧ Creg(C1, C2). All vertices in both clusters must have at least one vertex in the other cluster to which they are connected.
Row Dominant: Rdom(C1, C2) ≡ ∃x ∈ C1, ∀y (y ≠ x) ∈ C2: (x, y) ∈ E. There is at least one vertex in the first cluster that is connected to all the vertices in the second cluster.
Column Dominant: Cdom(C1, C2) ≡ ∃y ∈ C2, ∀x (x ≠ y) ∈ C1: (x, y) ∈ E. There is at least one vertex in the second cluster that is connected to all the vertices in the first cluster.
Row Functional: Rfun(C1, C2) ≡ ∀y ∈ C2, ∃ a single x ∈ C1: (x, y) ∈ E. All vertices in the second cluster are connected to exactly one vertex in the first cluster.
Column Functional: Cfun(C1, C2) ≡ ∀x ∈ C1, ∃ a single y ∈ C2: (x, y) ∈ E. All vertices in the first cluster are connected to exactly one vertex in the second cluster.

Figure XI.14. Visualization of some of the predicates on the connections between clusters.

Formally, a block model of a graph G = (V, E) is a tuple M = (U, K, T, Q, π, α), where
■ U is the set of clusters that we get by partitioning V;
■ K is the set of connections between elements of U, K ⊆ U × U;
■ T is a set of predicates that describe the connections between the clusters;
■ π is a mapping function between the clusters' connections and the predicates, π : K → T \ {Null};
■ Q is a set of averaging rules enabling us to compute the strength of the connection between any two clusters; and
■ α is a mapping function from the connections between the clusters to the averaging rules, α : K → Q.
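The predicates above are direct translations of their quantifier definitions. A minimal sketch (ours) of three of them, tested on the Figure XI.1 network:

```python
def null_p(c1, c2, E):
    """Null(C1, C2): no edge at all from C1 to C2."""
    return all((x, y) not in E for x in c1 for y in c2)

def com_p(c1, c2, E):
    """Com(C1, C2): every x in C1 connects to every y != x in C2."""
    return all((x, y) in E for x in c1 for y in c2 if x != y)

def rreg_p(c1, c2, E):
    """Rreg(C1, C2): every x in C1 connects to at least one y in C2."""
    return all(any((x, y) in E for y in c2) for x in c1)

# Symmetric edge set of the Figure XI.1 network.
E = {(1, 2), (1, 3), (2, 3), (3, 4), (3, 5)}
E |= {(y, x) for (x, y) in E}
print(null_p({4, 5}, {1, 2}, E))   # True  -- 4 and 5 never touch 1 or 2
print(com_p({1, 2}, {3}, E))       # True  -- both 1 and 2 connect to 3
print(rreg_p({4, 5}, {3}, E))      # True  -- each of 4, 5 reaches 3
```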
Averaging Rules (Q)
Listed below are a few options for giving a value to a connection between two clusters C1 and C2 based on the weights assigned to the edges in L(C1, C2):
\[
Ave(C_1, C_2) = \frac{\sum_{e \in L(C_1,C_2)} w(e)}{|L(C_1, C_2)|}
\]
\[
Max(C_1, C_2) = \max_{e \in L(C_1,C_2)} w(e)
\]
\[
Med(C_1, C_2) = \operatorname{median}_{e \in L(C_1,C_2)} w(e)
\]
\[
Ave\text{-}row(C_1, C_2) = \frac{\sum_{e \in L(C_1,C_2)} w(e)}{|C_1|}
\]
\[
Ave\text{-}col(C_1, C_2) = \frac{\sum_{e \in L(C_1,C_2)} w(e)}{|C_2|}
\]

Finding the Best Block Model
We can define a quality measure for any clustering and, on the basis of that measure, seek the clustering that will yield the ultimate block model of the network. First, we compute the quality of any clustering of the vertices.
We start with a fundamental problem. Given two clusters C1 and C2 and a predicate t ∈ T, how can we find the deviation of L(C1, C2) from satisfying t? This deviation will be denoted by δ(C1, C2, t). The approach here is to measure the number of 1's missing in the matrix C1 × C2 relative to a perfect matrix that satisfies t. Clearly, δ(C1, C2, t) = 0 iff t(C1, C2) is true. For example, if the matrix that represents L(C1, C2) is
\[
\begin{pmatrix}
1 & 0 & 1 & 1 \\
0 & 1 & 1 & 0 \\
1 & 1 & 1 & 1 \\
1 & 1 & 0 & 1
\end{pmatrix},
\]
then, because there are four 0's in the matrix, δ(C1, C2, Com) = 4.
If we assign some weight to each predicate t, we can introduce the notion of error with respect to two clusters and a predicate: ε(C1, C2, t) = w(t) · δ(C1, C2, t). This notion can now be extended to the error over a set of predicates. We seek the minimal error among all the individual errors on the members of the predicate set. This will also determine which predicate should be selected as the value of π(C1, C2):
\[
\varepsilon(C_1, C_2, T) = \min_{t \in T} \varepsilon(C_1, C_2, t), \qquad
\pi(C_1, C_2) = \arg\min_{t \in T} \varepsilon(C_1, C_2, t).
\]
Now that the error for a pair of clusters has been defined, we can define the total error for the complete clustering. Basically, it is the sum of the errors over all pairs of clusters:
\[
P(U, T) = \sum_{C_1 \in U,\ C_2 \in U} \varepsilon(C_1, C_2, T).
\]
If, for a given U, P(U, T) = 0, we can say that U is a perfect block model of the graph G = (V, E) with respect to T. In most cases, it will not be possible to find a perfect block model; hence, we will try to find the clustering U′ that minimizes the total error over all possible clusterings of V. If T = {Null, Com} we are seeking a structural block model (Lorrain and White 1971), and if T = {Null, Reg} we are seeking a regular block model (White and Reitz 1983).

Block Modeling of the Hijacker Network
We present two experiments with the hijacker network. In both experiments we seek a structural block model. The objective of the first experiment is to obtain four blocks (mainly because there were four flights). Using Pajek to do the modeling, we obtain the following connection matrix between the blocks (shown in Figure XI.15):

Final predicate matrix for the block modeling of Figure XI.15
      1     2     3     4
1     com   –     –     –
2     null  com   –     –
3     com   com   null  –
4     null  null  null  null

We can see that only four almost complete connections were identified (after removing the symmetric entries). Two of them are the clique of cluster 2 and the almost clique of cluster 1. In addition, we have an almost complete connection between clusters 1 and 3 and between clusters 2 and 3. All other connections between clusters are closer to satisfying the null predicate than they are to satisfying the com predicate.
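The deviation δ for the two predicates of a structural block model can be computed by counting cells. A minimal sketch (ours), reproducing the 4 × 4 example above (which, being a diagonal block, ignores its diagonal under Com):

```python
def delta_com(block):
    """delta(C1, C2, Com): number of 1's missing from a perfect 'complete'
    block; diagonal cells are exempt, matching the y != x clause of Com."""
    return sum(1 for i, row in enumerate(block)
                 for j, bit in enumerate(row) if i != j and bit == 0)

def delta_null(block):
    """delta(C1, C2, Null): number of 1's present where none should be."""
    return sum(bit for row in block for bit in row)

block = [[1, 0, 1, 1],    # the example matrix from the text
         [0, 1, 1, 0],
         [1, 1, 1, 1],
         [1, 1, 0, 1]]
print(delta_com(block), delta_null(block))   # 4 12
```

Taking the smaller weighted deviation of the two yields ε(C1, C2, T) for T = {Null, Com}, and summing over all pairs of clusters yields the total error P(U, T) that the search tries to minimize.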
Figure XI.15. Block modeling with four blocks.

The final error matrix is shown below; we can see that cluster 2 is a complete clique because its error is 0, whereas the connection between clusters 3 and 1 is not complete because three connections are missing – namely, between Abdulaziz Alomari and any of {Khalid Al-Midhar, Majed Moqed, and Nawaq Alhamzi}. The total error is 16. In order to see a schematic view of the network, we shrank the clusters into single nodes. If there is at least one connection between two clusters, we see a line between the clusters' representatives. The name selected for each cluster is the name of the first member of the cluster (alphabetically, based on last name, first name). The shrunk network is shown in Figure XI.16.

Figure XI.16. Shrinking the network based on the four blocks of Figure XI.15.

Final error matrix for the block modeling of Figure XI.15
      1     2     3     4
1     4     –     –     –
2     3     0     –     –
3     2     0     0     –
4     1     2     0     4

The objective of the second experiment is to see how the clustering and the associated error cost change when we set a higher number of target clusters. We ran the block modeling of Pajek again, specifying that we want to obtain six blocks or clusters. In this case the total error dropped to 9. The six blocks are shown in Figure XI.17, followed by the predicate matrix of the block modeling and the final error matrix. We can see that five of the six blocks are close to a complete block (clique), whereas there are only three connections between the blocks.

Figure XI.17. Block modeling with six blocks.
Here are the final predicate matrix and error matrix for the block modeling of Figure XI.17:

      1     2     3     4     5     6
1     com   –     –     –     –     –
2     null  null  –     –     –     –
3     null  null  com   –     –     –
4     com   com   null  com   –     –
5     com   null  null  null  com   –
6     null  null  null  null  null  com

      1     2     3     4     5     6
1     0     –     –     –     –     –
2     0     0     –     –     –     –
3     1     0     1     –     –     –
4     0     0     0     0     –     –
5     2     0     1     0     0     –
6     0     1     1     2     0     0

XI.6 PATTERN MATCHING IN NETWORKS
Often we have a pattern expressed as a small graph P, and we want to see if it is possible to find a subgraph of G that will match P. This problem may arise, for instance, when we want to see if an instance of a given scenario can be found in a large network. The scenario would be expressed as a small graph containing a small number of vertices with specific relations that connect them. We then want to see if instances of the scenario can be found within our network.
An example of such a pattern is shown in Figure XI.18. We have specified a pattern of one person who is connected only to three other people who have no connections between themselves. We can find three subgraphs within the hijackers' graph that contain a vertex connected to only three other vertices (marked 1, 2, and 3 in the figure); however, only 1 and 2 fully match the pattern. Subgraph 3 does not match the pattern because Fayez Ahmed and Ziad Jarrahi are connected.

Figure XI.18. Pattern matching in the hijackers' graph.

The naïve algorithm for finding exact matches of the pattern is based on simple backtracking – that is, if a mismatch is found, the algorithm backtracks to the most recent junction in the graph visited before the failure. We can also search for approximate matches by using techniques such as edit distances to find subgraphs that are similar to the pattern at hand.
One of the most common patterns to be searched for in a graph is some form of a directed graph that involves three vertices and some arcs connecting the vertices. This form of pattern is called a triad, and there are 16 different types of triads. One of them is the empty triad, in which there are no arcs at all, and another one is the full triad, in which six arcs connect every possible pair of vertices in the triad.
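A minimal sketch (ours) of the naïve backtracking idea for induced pattern matching; the vertex names in the pattern and the toy graph are hypothetical, and real systems add pruning heuristics on top of this scheme:

```python
def match_pattern(p_adj, g_adj):
    """Backtracking search for an exact (induced) embedding of the pattern
    graph into the data graph: extend a partial assignment one pattern
    vertex at a time and backtrack on the first mismatch."""
    p_nodes = list(p_adj)

    def extend(assign):
        if len(assign) == len(p_nodes):
            return dict(assign)
        pv = p_nodes[len(assign)]
        for gv in g_adj:
            if gv in assign.values():
                continue
            # both adjacency and non-adjacency with already-assigned
            # vertices must be preserved for an induced match
            if all((qv in p_adj[pv]) == (assign[qv] in g_adj[gv])
                   for qv in assign):
                assign[pv] = gv
                result = extend(assign)
                if result:
                    return result
                del assign[pv]           # mismatch downstream: backtrack
        return None

    return extend({})

# Pattern of Figure XI.18: a hub connected to three mutually unconnected people.
pattern = {"hub": {"a", "b", "c"}, "a": {"hub"}, "b": {"hub"}, "c": {"hub"}}
graph = {1: {2, 3, 4}, 2: {1}, 3: {1}, 4: {1}, 5: set()}
print(match_pattern(pattern, graph))   # e.g. {'hub': 1, 'a': 2, 'b': 3, 'c': 4}
```

The induced check is what rejects a case like subgraph 3 above: if two of the hub's neighbors are themselves connected, the non-adjacency required by the pattern fails and the search backtracks.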
XI.7 SOFTWARE PACKAGES FOR LINK ANALYSIS
There are several packages for performing link analysis in networks. Some are fairly expensive and hence are probably out of reach for the casual user. We describe here three packages that are either totally free or relatively inexpensive.

XI.7.1 Pajek
Pajek is a freeware package developed at the University of Ljubljana that can handle networks containing hundreds of thousands of vertices. Pajek expects to get the input networks in a proprietary format, which includes the list of vertices and then lists of arcs (directed) and edges (undirected) between the vertices. There are programs that enable converting a simple set of binary connections to the Pajek (.net) format. Pajek supports a very large number of operations on networks, including centrality computations, path finding, component analysis, clustering, block modeling, and many other operations. In addition, it includes a built-in drawing module that incorporates most of the layout algorithms described in this chapter. Pajek can be downloaded from . The converters can be downloaded from and .

XI.7.2 UCINET
UCINET is a fairly robust network analysis package. It is not free, but even for nonacademics it costs less than 300 dollars. It covers all the operations described in this chapter, including centrality measures (with a larger variety of options than Pajek), clustering, path finding, and component analysis. UCINET can export and import Pajek files. Netdraw is the visualization package of UCINET. UCINET and Netdraw can be downloaded from < com/download products.htm>.

XI.7.3 NetMiner
NetMiner is the most comprehensive package of the three, but it is also the most expensive. The professional version costs a little less than 1,000 dollars for commercial use. The package offers all the operations included in UCINET and Pajek and is fairly intuitive to use. NetMiner can be downloaded from .

XI.8 CITATIONS AND NOTES
Section XI.1
For a great introduction to graph algorithms, please refer to Aho, Hopcroft, and Ullman (1983). For in-depth coverage of the area of social network analysis, see Wasserman and Faust (1994) and Scott (2000).

Section XI.2
Force-based graph drawing algorithms are described in Kamada and Kawai (1989) and Fruchterman and Reingold (1991). Algorithms for drawing large graphs are addressed in Davidson and Harel (1996), Harel and Koren (2000), and Hadany and Harel (2001).

Section XI.4
The degree centrality was introduced in Freeman (1979). The betweenness centrality measure is due to Freeman (1977, 1979). The closeness centrality measure was introduced in Sabidussi (1966). The power centrality is due to Bonacich (1987). The eigenvector centrality originates from Bonacich (1972). Good descriptions of basic graph algorithms can be found in Aho et al. (1983). Cores were introduced in Seidman (1983).

Section XI.5
The notions of structural equivalence and regular equivalence were introduced in Lorrain and White (1971) and further expanded in Batagelj, Doreian, and Ferligoj (1992) and Borgatti and Everett (1993). Block modeling was introduced in Borgatti and Everett (1992) and Hummon and Carley (1993). The implementation of block modeling in Pajek is described in Batagelj (1997) and De Nooy, Mrvar, and Batagelj (2004).

Section XI.6
The notion of edit distance between graphs as a vehicle for finding patterns in graphs is described in Zhang, Wang, and Shasha (1995). Finding approximate matches in undirected graphs is discussed in Wang et al. (2002).

XII Text Mining Applications

Many text mining systems introduced in the late 1990s were developed by computer scientists as part of academic "pure research" projects aimed at exploring the capabilities and performance of the various technical components making up these systems. Most current text mining systems, however – whether developed by academic researchers, commercial software developers, or in-house corporate programmers – are built to focus on specialized applications that answer questions peculiar to a given problem space or industry need. Obviously, such specialized text mining systems are especially well suited to solving problems in academic or commercial activities in which large volumes of textual data must be analyzed in making decisions.
Three areas of analytical inquiry have proven particularly fertile ground for text mining applications. In various areas of corporate finance, bankers, analysts, and consultants have begun leveraging text mining capabilities to sift through vast amounts of textual data with the aims of creating usable forms of business intelligence, noting trends, identifying correlations, and researching references to specific transactions, corporate entities, or persons. In patent research, specialists across industry verticals at some of the world's largest companies and professional services firms apply text mining approaches to investigating patent development strategies and finding ways to better exploit existing corporate patent assets. In life sciences, researchers are exploring enormous collections of biomedical research reports to identify complex patterns of interactivities between proteins.
This chapter discusses prototypical text mining solutions adapted for use in each of these three problem spaces. Corporate intelligence and protein interaction analysis applications are useful as examples of software platforms widely applicable to various problems within very specific industry verticals.
On the other hand, a patent research application is an example of a single, narrowly focused text mining application that can be used by specialists in corporations across a wide array of different industry verticals such as manufacturing, biotechnology, semiconductors, pharmaceuticals, materials sciences, chemicals, and other industries, as well as by patent professionals in law firms, consultancies, engineering companies, and even some government agencies.
The discussions of applications in this chapter intentionally emphasize those elements of a text mining system that have the greatest impact on user activities, although some broader architectural and functional points will at least be peripherally considered. This emphasis is chosen partly because many text mining applications, by their very nature, build on generic text mining components (e.g., preprocessing routines, search algorithms) and create application specificity by means of customizing search refinement and user-interface elements in ways that are more oriented toward specialized user activities with particular problem space emphases. This approach also serves to permit discussion of how some example text mining applications tend to "look and feel" in the real world to users.
This chapter first discusses some general considerations before exploring in detail a business intelligence application aimed at addressing corporate finance questions. Discovery and exploration of biological pathways information and patent search are treated more briefly.

XII.1 GENERAL CONSIDERATIONS
The three text mining applications examined in this chapter exhibit a fair amount of commonality in terms of basic architecture and functionality – especially with respect to the preprocessing operations and core text mining query capabilities on which they depend. However, the systems differ markedly in their implementations of background knowledge, their preset queries, and their visualization functionality, as well as in the specifics of the content they address.

XII.1.1 Background Knowledge
Background knowledge, preset queries, and visualization capabilities are the areas in which custom text mining applications are most commonly oriented toward the particularities of a specific problem space. A discussion of general considerations germane to these three areas is useful in considering how text mining applications are crafted – especially when they are crafted from – or "on top of" – components derived from more generic text mining systems.

XII.1.2 Generalized Background Knowledge versus Specialized Background Knowledge
As has already been discussed in Chapter II, background knowledge can play many different useful roles in the architecture of text mining systems. However, beyond the question of how background knowledge is architecturally integrated into a system, questions of what constitutes the content of the background knowledge most often relate to the nature of that system's application. Indeed, many text mining applications rely on both generalized and specialized background knowledge.
As the name implies, generalized background knowledge derives from general-information source materials that are broadly useful within a single language. Generalized background knowledge tends to involve background knowledge from very broadly applicable knowledge domains. Generalized background knowledge frequently comes in the form of taxonomies, lexicons, and whole ontologies derived from widely useful knowledge sources.
Such sources can be as formalized as the WordNet ontology or as informal as simpler taxonomies or lexicons based on general-use knowledge sources such as commercial dictionaries, encyclopedias, fact books, or thesauri. The rise of various aids to ontology creation and translation, including the DARPA Agent Markup Language (DAML) and the Web Ontology Language (OWL), has increased the availability and number of such generalized background knowledge sources.

Specialized background knowledge originates from knowledge sources that relate more specifically to a particular problem area or sphere of activity. Such knowledge need not come from overly complex ontological source materials. For instance, many text mining applications aimed at solving problems in the life sciences make use of partial or whole listings of terms and term relationships from the National Library of Medicine's controlled vocabulary, MeSH, to create taxonomies or refinement constraints useful for, and consistent with, document collections populated by MEDLINE/PubMed documents. However, text mining applications can also incorporate more comprehensive background knowledge by integrating elements of various public domain or commercial formal ontologies; examples of such sources include the GO Consortium's ontologies and ontologies developed by companies such as Reed Elsevier or BioWisdom.

Even the most general-purpose text mining applications can usually benefit from generalized background knowledge, but text mining applications aimed at niche activities in particular benefit from the inclusion of specialized background knowledge. Text mining applications may implement both types of background knowledge or may meet application needs modestly with information from only one type of background knowledge source. Some text mining applications implement specialized background knowledge from multiple, diverse domains. For instance, a text mining application aimed at investigating patents in the automotive industry might benefit from specialized background knowledge related to patent-specific activities as well as to topics in the automotive industry. Figure XII.1 illustrates how a taxonomy of corporate information can help provide context and structure to the browsing of distributions. Figure XII.2 shows how an interactive visualization graph can be made more relevant to an industry specialist by leveraging background knowledge in the form of a controlled vocabulary.

Designers of text mining systems need to weigh carefully the real benefits of including various types of background knowledge in their applications. Including too much background knowledge can have negative impacts on system performance, maintenance, and usability. Using multiple sources of background knowledge in any text mining application can increase the likelihood of introducing inconsistencies in how data are categorized or defined and, as a consequence, increase the maintenance work required to keep the background knowledge consistent. Also, larger amounts of background knowledge – even if internally consistent – can make using a text mining application more cumbersome for the user. For instance, an application oriented toward exploring a collection of proteomics research documents might have available a listing of the chemical terms pertinent to proteomics as elements of a query or refinement constraint.
If, however, one were to include a comprehensive controlled vocabulary of terms useful across all of the various academic disciplines concerned with chemical compounds in this proteomics-oriented application's background knowledge base, users might be forced to navigate much larger hierarchical trees of less relevant concepts when choosing entities for queries. Similarly, these users might encounter endlessly scrolling pull-down boxes when attempting to create refinement conditions. Both of these circumstances would limit the intuitiveness and speed of knowledge discovery activities for users interested only in topics pertinent to proteomics research.

Figure XII.1. A distribution browser that makes use of a taxonomy based on specialized background knowledge. (From Feldman, Fresko, Hirsh, et al. 1998.)

Finally, inclusion of larger specialized background knowledge data, in particular, can lead to much more labor-intensive and difficult data pruning requirements over time. Complex background knowledge maintenance requirements may also introduce additional overhead in terms of application GUI screens devoted to optimizing maintenance activities. As a result, there is significant incentive for designers to adopt a "best overall bang for the buck" approach to employing background knowledge in their applications.

Figure XII.2. Spring graph of concepts (informed by the MeSH-controlled vocabulary). (From Feldman, Fresko, Hirsh, et al. 1998.)

XII.1.3 Leveraging Preset Queries and Constraints in Generalized Browsing Interfaces

In addition to leveraging the power of specialized background knowledge, a text mining system can gain a great deal of de facto domain customization by offering users lists of prespecified queries and search refinement constraints meaningful to the problem space they are interested in investigating. With respect to queries, preset or "canned" query lists commonly make use of two simple approaches to providing helpful queries that speed along knowledge discovery in a given application domain.

Figure XII.3. A constraint filter interface for exploring association rules that leverages corporate M&A background knowledge.

First, the types of queries typical to a given domain can be made available in a custom GUI. For instance, if knowledge workers in proteomics frequently look for associations between proteins, a text mining application aimed at proteomics researchers could use an association-oriented query construction GUI as the default query interface. This query construction interface can be supplemented with a pick list of prespecified queries that enables rapid creation of association rule-type queries and allows quick interrogation of the problem space.
Second, text mining applications often make use of such "canned query" pick lists to create templates with which to populate new queries similar to those common for the domain. Query templates can be enhanced with pull-downs that help users fill in the various entities and parameters of a query, speeding along query construction.

Often, grouping and labeling of preset queries can greatly improve the ease of query construction and execution. Compared with generic query construction interfaces like those illustrated in Section II.5, which provide great flexibility but force a user to think through each choice of constraint parameters and query variables (such as entities or events), well-organized and clearly identified pick lists of queries appropriate for a problem space trade flexibility for speed and ease of use during query construction.

With regard to query constraints, specialized background knowledge can be used not only to help create a consistent, domain-specific nomenclature for concepts found among documents in a document collection – and useful taxonomies in which to place those concepts – but also to facilitate the use of postquery refinement constraints relevant to the application's aims (see Figure XII.3). For instance, instead of placing all concepts from a domain within the refinement lists, "pruned" lists of taxonomical groupings or entities useful as parameters can be used to populate pull-downs of constraints that are meaningful to specialists. Designers of text mining applications can also preset variables appropriate to the text mining application's realistic universe of potentially useful constraints. Doing so provides "assisted" constraint creation more relevant to the problem space addressed by the application.

XII.1.4 Specialized Visualization Approaches

As mentioned in Section X.1, visualization approaches demonstrate strengths and weaknesses with respect to graphing different types of data. This is a key consideration in determining the types of visualization approaches useful for a given text mining application. Providing circle graphs to investment bankers interested in tracking trends in corporate events over time might not stimulate much exploration of these trends by users, whereas providing for the quick generation of histograms of corporate names mentioned in articles for a given period might prove very useful to this same group of investment bankers when tracking corporate activity and press coverage.

One important consideration for developers weighing the best use of visualization methodologies is the integration of specialized background knowledge in the generation of graphs. Just as specialized background knowledge can be used to inform domain-specific constraints, it can also be used to help format the information presented in graphs to make them more relevant to the problem space they model. For instance, in assigning colors to the elements of a visualization, a text mining application can offer in its GUI a palette of colors associated with concept names derived from a specialized background knowledge lexicon. Alternatively, a visualization GUI itself can contain a slider widget that allows constraint filters to switch between values drawn from prespecified thresholds relevant to the text mining application's problem space.
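To make the color-assignment idea concrete, here is a minimal Python sketch of how a specialized lexicon might drive consistent node coloring in a graph visualization. The concept-to-type mapping and the color values are purely illustrative assumptions, not details of any system described in this chapter; a real application would load the mapping from its background knowledge source.

    # Hypothetical concept-to-type lexicon; a real application would load this
    # from a specialized background knowledge source (e.g., a controlled
    # vocabulary) rather than hard-coding it.
    CONCEPT_TYPES = {
        "Cisplatin": "drug",
        "Lung Carcinoma": "disease",
        "Biogen Idec Inc.": "company",
    }

    # One display color per taxonomy type, chosen arbitrarily for the sketch.
    TYPE_COLORS = {"drug": "#1f77b4", "disease": "#d62728", "company": "#2ca02c"}

    def node_color(concept: str, default: str = "#7f7f7f") -> str:
        """Resolve a concept to its taxonomy type, then to a display color."""
        return TYPE_COLORS.get(CONCEPT_TYPES.get(concept, ""), default)

    for concept in ("Cisplatin", "Lung Carcinoma", "Aspirin"):
        print(concept, "->", node_color(concept))

Because every graph resolves colors through the same lexicon, a drug node looks the same in a circle graph, a spring graph, or a histogram, which is the practical payoff of pushing background knowledge into the presentation layer.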
XII.1.5 Citations and Notes

Some general introductory materials useful for gaining perspective on text mining applications include Hearst (1999); Nasukawa and Nagano (2001); and Varadarajan, Kasravi, and Feldman (2002). Information resources on the DARPA DAML program can be found at <www.daml.org>. Resources on MeSH and UMLS are available from the United States National Library of Medicine Medical Subject Headings Web site.

XII.2 CORPORATE FINANCE: MINING INDUSTRY LITERATURE FOR BUSINESS INTELLIGENCE

Text mining approaches lend themselves to many of the business intelligence tasks performed in corporate finance. Text mining tools are particularly well suited to automating, augmenting, and transforming business intelligence activities more traditionally accomplished by means of labor-intensive, manual reviews of industry literature for patterns of information. These manual reviews typically entail sifting through vast amounts of textual data relating to companies, corporate executives, products, financial transactions, and industry trends.

In the past, such reviews of industry literature have been performed by large cadres of analysts in investment banks, corporate development departments, management consultancies, research think tanks, and other organizations that now face continuing pressure to streamline operational costs while increasing the comprehensiveness and quality of their analytical work. Employing text mining applications customized for business intelligence tasks can dramatically improve the speed, exhaustiveness, and quality of such reviews. As a result, business intelligence systems based on text mining methodologies are fast becoming a critical part of many corporate analysts' professional tool chests.

This section describes a system we will call Industry Analyzer – a simple example of a business intelligence application based on many of the technical approaches discussed throughout this book. The example is purposely a simple one, using only a small data collection, very simple background knowledge support, and no link detection functionality, with an emphasis on a high-level, user-oriented view of the application.

Specifically, Industry Analyzer is an application developed to allow banking analysts – as well as their peers in corporate development and M&A groups – to explore industry information about companies, people, products, and events (transactions, corporate actions, financial reporting announcements) in a given industry vertical. The implementation example has been configured to support knowledge discovery in news stories about the life sciences business sector.

The life sciences business sector – which includes a number of constituent industries such as pharmaceuticals, biotechnology, and health care provisioning – is a complex sector for small industry groups to follow, given the thousands of companies developing and selling tens of thousands of major products to hundreds of millions of people in the United States alone. Business analysts of the life sciences sector need to sift quickly through vast numbers of references in news stories to find information relevant to their concerns: how well a company or a product is doing, who a company's strategic partners or competitors are, which companies have announced corporate actions, what products company managers are pushing in interviews, which companies' products are getting the most coverage, and so on.
Some analysts seek information suggesting potential merger or acquisition pairings, for much investment banking business comes from serving as an advisor on such transactions. Others look for smaller life sciences companies that show signs of one day going public, whereas still others seek news indicating that a company may be open to divesting itself of a product or division and entering either an asset sale or a spin-out.

Industry Analyzer assists industry analysts in combing large amounts of trade information more effectively in their daily work. In a straightforward way, it facilitates the creation of simple searches and, perhaps most importantly, supports visualizations that help analysts better digest and interact with information collected from numerous individual, industry-specific news stories.

The implementation example of Industry Analyzer presented here has a narrow focus on biomedical companies involved in cancer research and treatment. This is not inconsistent with the types of implementations encountered in reasonable, real-world scenarios, for banking analysts typically specialize in exploring information related to particular niches within an overall industry vertical.

Figure XII.4. Industry Analyzer functional architecture: preprocessing routines (term extraction, information extraction) feed a processed document collection built from BioWorld documents; core mining operations (pattern discovery algorithms) draw on background knowledge from the NCI Metathesaurus; and a GUI offers the analyst browsing, query construction, visualization, and refinement filters.

XII.2.1 Industry Analyzer: Basic Architecture and Functionality

Industry Analyzer follows the rough architectural outline for a text mining system illustrated in Section II.2. Because it is not a complex system, it exhibits a relatively simple functional architecture (see Figure XII.4). Other than its content, background knowledge sources, and some of its presentation-layer elements, Industry Analyzer is built around quite generic preprocessing and core mining components of the type discussed at length in Chapters II through VIII. The application can be described in terms of the main components that make up its functional architecture.

Data and Background Knowledge Sources

The raw data source for Industry Analyzer's document collection is a group of 124 news articles from BioWorld, an industry publication that reports news relating to M&A activities in the life sciences business sector. The articles have a particular focus on biotech and pharmaceutical companies and their products and were collected from the period stretching from 11 October 2004 to 17 November 2004. The following is a typical text example from these BioWorld articles:

    Biogen Idec Inc. ended its third quarter with $543 million in revenues, slightly lower than analyst estimates, as it nears the one-year anniversary of a merger that made it the world's largest biotech company. The Cambridge, Mass.-based company reported non-GAAP earnings per share of 37 cents and net income of $132 million, compared with 35 cents and $123 million for the quarter last year. Analysts' consensus estimate for the quarter was 35 cents.

Figure XII.5. The taxonomy used by Industry Analyzer, supplemented by background knowledge. (Courtesy of the ClearForest Corporation.)

Industry Analyzer utilizes a very simple specialized background knowledge implementation primarily consisting of taxonomies of drug names, genes, and
diseases taken from the National Cancer Institute's NCI Thesaurus, which is in part based on the National Library of Medicine's Unified Medical Language System (UMLS) Metathesaurus. The taxonomies can be leveraged with little modification from the NCI Thesaurus's formalized hierarchical nomenclature and positioned within an overarching taxonomy of entities to support all of Industry Analyzer's functional requirements.

The NCI Thesaurus is a generally useful background knowledge source for building text mining application taxonomies, for although it probably cannot be described as a full-blown ontology, it nevertheless includes true IS-A-type hierarchies. These hierarchies detail semantic relationships among drugs, genes, diseases, chemicals, organisms, anatomy, and proteins for thousands of defined domain concepts. The implementation of Industry Analyzer presented in this example also includes some less formal background knowledge for corporate organizations, locations, and industry concepts culled from various online sources. An example of an Industry Analyzer GUI for choosing entities based on a hierarchy informed by background knowledge can be seen in Figure XII.5.

Preprocessing Operations

Industry Analyzer uses a simple regimen of preprocessing operations to prepare the application's processed document collection. The BioWorld documents are first subjected to a series of term-extraction methodologies like those described in Section III.4. This involves the labeling of each document (i.e., BioWorld article) with a set of terms extracted from the document.

Initially, standard linguistic processing routines are run against each document, performing various tokenization, POS-tagging, and lemmatization operations with the aid of an external lexicon and a limited amount of manually tagged data for training. Then, lists of term candidates are generated for each document, after which filtering routines are run to create a final list of terms with which to tag each document. In addition, a date stamp is added to each document based on the publication date of the article. Industry Analyzer's term extraction processes save the output and all the tags generated by the preprocessing routines in an XML-based, tagged file format. A highly simplified version of the output from the term extraction process can be seen in Figure XII.6.

Figure XII.6. Example of output from Industry Analyzer's term extraction process. (Courtesy of BioWorld.)
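The tagging-and-date-stamping step just described can be approximated in a few lines. The Python sketch below shows one plausible shape of that step; the term list is a hard-coded stand-in for the candidate-generation and filtering routines the chapter describes, and the XML element names are invented for illustration.

    from xml.sax.saxutils import escape

    # Hard-coded stand-in for the filtered term list that the candidate-generation
    # and filtering routines described above would actually produce.
    TERMS = ["Biogen Idec Inc.", "net income", "earnings per share"]

    def tag_terms(text: str, pub_date: str) -> str:
        """Wrap known terms in <term> tags and date-stamp the document."""
        tagged = escape(text)
        # Longest terms first, so "Biogen Idec Inc." beats any shorter overlap.
        for term in sorted(TERMS, key=len, reverse=True):
            tagged = tagged.replace(escape(term), f"<term>{escape(term)}</term>")
        return f'<document date="{pub_date}">{tagged}</document>'

    print(tag_terms("Biogen Idec Inc. reported net income of $132 million.",
                    "2004-10-18"))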
Feature extraction at the term level is important for an application like Industry Analyzer because word-level or even more granular feature-level extraction would miss or misinterpret many of the multiword terms of art used in both corporate finance and the life sciences. A simple example can be seen in Figure XII.6, in which the term Biogen Idec Inc. has been extracted. Because Biogen and Idec were both individual company names before a merger created Biogen Idec Inc., identifying Biogen Idec Inc. as a single entity is important information that marks this article's content as referring to a time after the two companies merged to become the world's largest biotechnology company. Similarly, terms like net income, consensus estimate, and earnings per share all have very specific meanings to corporate finance analysts that are highly relevant to knowledge discovery activities relating to corporate events.

Term-level extraction also integrates better with concept-level categorization of documents, which appends concept names descriptive of a particular document that may not actually appear as terms in that document. For instance, in the fragment illustrated in Figure XII.6, concept tags such as midcap, earnings report, publicly held (versus privately held), or company-neutral (as opposed to company-positive or company-negative) might also be automatically or manually added to the postprocessed document to provide useful supplementary information to the entity-related data revealed by the term extraction process, enhancing the quality of subsequent information extraction–oriented processing.

After completing the term extraction processes, Industry Analyzer subjects documents to a rule-based information extraction process based on a simplified version of the DIAL language described in Appendix A. By taking advantage of sets of formal financial and biomedical rule bases, Industry Analyzer is able not only to identify repeated instances of patterns involving entities but also to construct basic "facts" (e.g., Biogen Idec is the world's largest biotech company, AlphaCo and BetaCo are strategic alliance partners, or third-quarter net income for Acme Biotech was $22 million) and "events" (e.g., the Glaxo and Smith Kline merger, the filing of BetaCo's Chapter 11 bankruptcy, or the Theravance IPO) involving entities derived from these patterns.

An example of a DIAL-like rule can be found in Figure XII.7. This rule is one of several possible ones for identifying strategic alliances between companies (within a rule-based language syntax like that of the DIAL language). Note that the rule also includes a few constraints for discarding any potential pairings between a company and a person as a strategic alliance.

    FStrategicAllianceCCM(C1, C2) :-
        Company(Comp1) OptCompanyDetails
        "and" skip(Company(x), SkipFail, 10)
        Company(Comp2) OptCompanyDetails
        skip(WCStrategicAllianceVerbs, SkipFailComp, 20)
        WCStrategicPartnershipVerbs
        skip(WCStrategicAlliance, SkipFail, 20)
        WCStrategicAlliance
        verify(WholeNotInPredicate(Comp1, @PersonName))
        verify(WholeNotInPredicate(Comp2, @PersonName))
        @% @! { C1 = Comp1; C2 = Comp2 };

Figure XII.7. IE rule for identifying a strategic partnership between two companies. (Courtesy of the ClearForest Corporation.)

After rule-based information extraction has been completed, a queryable processed document collection is created that contains not only entity-related information but also information related to a large number of identified "facts" and "events." A formal list of the types of facts and events identified within this document collection is also stored and made available to support fact- or event-based querying by Industry Analyzer's core mining operations.
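To give a feel for what such a rule does operationally, here is a drastically simplified Python sketch that mimics the overall shape of the Figure XII.7 rule: two company mentions joined by "and," followed within a bounded window by strategic-alliance vocabulary. It is a regex toy built on invented vocabulary lists, not a rendering of DIAL semantics (DIAL's skip, verify, and entity predicates have no counterpart here).

    import re

    # Invented vocabulary lists; a real rule base would use the output of the
    # entity recognizer and curated verb/noun word classes instead.
    COMPANIES = ["AlphaCo", "BetaCo", "Acme Biotech"]
    ALLIANCE_VERBS = r"(?:formed|announced|entered into)"
    company_pat = "|".join(map(re.escape, COMPANIES))

    # Two company mentions joined by "and", then alliance vocabulary within a
    # bounded window -- a crude echo of the skip() windows in Figure XII.7.
    FACT_PATTERN = re.compile(
        rf"({company_pat})\s+and\s+({company_pat})\s+{ALLIANCE_VERBS}"
        rf".{{0,40}}?strategic (?:alliance|partnership)"
    )

    def strategic_alliances(text: str):
        """Yield (company1, company2) pairs asserted as strategic allies."""
        for match in FACT_PATTERN.finditer(text):
            yield match.group(1), match.group(2)

    print(list(strategic_alliances(
        "AlphaCo and BetaCo announced a broad strategic alliance on Tuesday.")))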
Core Mining Operations and Refinement Constraints

Industry Analyzer supports a reasonably broad range of common text mining query types like those discussed in Chapter II. It supports queries producing various distribution and proportion results as well as the ability to generate and display information relating to frequent sets and associations found within its document collection. Industry Analyzer can also support the construction and execution of maximal association queries.

As a general rule, with most current corporate finance–oriented text mining applications, there is little need for exotic query types. What is more important for the vast majority of corporate finance users is (a) a rich and flexible set of entities, fact types, and event types with which to shape queries and (b) a relatively easy way to generate and display results that lead to iterative exploration of the data stored in the document collection. For instance, regardless of the kind of result-set display chosen by the user, queries should be easily constructible for corporate finance users on the basis of combinations of entity, fact, and event information. To support this, Industry Analyzer offers GUI-driven query generation with prepopulated pick lists and pull-downs of entities, fact types, and event types for all of its main forms of queries. Figure XII.8 shows how a query can be generated in Industry Analyzer.

Figure XII.8. Generating a query of the type "entity within the context of" in Industry Analyzer. (Courtesy of the ClearForest Corporation.)

In addition to these ease-of-use features, Industry Analyzer can offer a user a menu of common queries. The prototype described here contains preset, distribution-type queries for querying a company name supplied by the user in the context of merger, acquisition, and strategic alliance. It also contains a list of preconfigured association-type queries for a given company name and product, other company, and person. These extremely easy-to-execute queries allow even less technically literate analysts or infrequent users of the Industry Analyzer system to derive some value from it. More experienced users, on the other hand, can leverage such preconfigured queries as a quick way to create a broad "jump-start" query that can then be shaped through refinement, browsing, and further query-based search.

Refinement constraints carefully customized to the needs of corporate finance professionals can also do much to make knowledge-discovery query operations intuitive, useful, and iterative. Industry Analyzer supports a wide range of background, syntactical, quality-threshold, and redundancy constraints.
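As a rough illustration of what quality-threshold and redundancy constraints do to a raw result set, consider the following Python sketch. The result format (concept pairs with a support count) and the threshold value are assumptions made for the example, not details of Industry Analyzer itself.

    from typing import Iterable

    # Assumed result shape: (concept_a, concept_b, support); the threshold
    # default below is illustrative, not a setting of any real system.
    Association = tuple[str, str, int]

    def refine(results: Iterable[Association],
               min_support: int = 3) -> list[Association]:
        """Apply a quality threshold, then drop redundant mirror-image pairs."""
        seen, kept = set(), []
        for a, b, support in results:
            key = frozenset((a, b))
            if support >= min_support and key not in seen:
                seen.add(key)
                kept.append((a, b, support))
        return kept

    raw = [("Biogen Idec", "merger", 5), ("merger", "Biogen Idec", 5),
           ("Acme", "IPO", 1)]
    print(refine(raw))  # -> [('Biogen Idec', 'merger', 5)]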
Presentation Layer – GUI and Visualization Tools

Industry Analyzer's GUI and visualization approaches were designed with the understanding that industry analysts are savvy computer users but not typically programmers. As a result, Industry Analyzer's GUI is more of a dashboard or workbench through which nearly all functionality is available via graphical menus and pop-ups – and, consequently, with less emphasis on giving analysts direct, script-level access to the underlying query language (or to any of the preprocessing routines, such as the rule bases for information extraction).

A very important part of creating a custom application for a particular audience of users is ensuring that presentation-layer elements speak to the specific knowledge discovery needs of that audience. Much of this can be accomplished by having all knowledge discovery search and display elements "make sense" in terms of the nomenclature and taxonomical groupings relevant to the audience – that is, by simply exposing the domain-relevant entity, fact, and event information derived both from background knowledge sources and from preprocessing extraction routines.

Figure XII.9. GUI-enabled pick list for selecting facts and events for a visualization. (Courtesy of the ClearForest Corporation.)

Industry Analyzer attempts to accomplish this last objective through its reliance on a consistent display of an entity hierarchy for queries that include entity names in their search parameters. The system reinforces this main GUI-enabled search construction paradigm with an additional emphasis on supporting the construction of fact- and event-oriented searches via consistent pull-down and pop-up listings of event and fact types. The application also uses these same pick lists for formatting visualizations, further acclimating users to a familiar listing of these event and fact types (see Figure XII.9).

In terms of display and visualization of search results, Industry Analyzer defaults to returning search results in a simple table view display; see the example in Figure XII.10. Industry Analyzer also supports a range of more sophisticated visual graphing and mapping options for displaying query results, including the following:

■ Histograms for visualizing absolute occurrence counts, counts in context, and distribution-type queries;
■ Simple concept graphs for clearly displaying relationship information between entities, facts, or events; and
■ Circle graphs for visualizing associations – including associations within context organized according to some taxonomical ordering – and large-scale relationship maps with large numbers of constituent points.

Such visualizations can be executed either from a dedicated pop-up GUI after a query is run or directly from the table view display for a partial or full set of query results. For example, a user could interactively decide to generate a circle graph visualization after choosing the first two companies from the query results displayed in the table view illustrated in Figure XII.10. The resulting circle graph can be seen in Figure XII.11.

Figure XII.10. Table view of connections between persons and companies in the context of management changes. (Courtesy of the ClearForest Corporation.)

Figure XII.11. Circle graph showing connections between people and companies in the context of management change. (Courtesy of the ClearForest Corporation.)

Finally, from any of the result-set displays, a user can interactively navigate through point-and-click actions to either a summarized or a full, annotated version of the actual document text underlying the query results. This allows a corporate analyst to move quickly from high-level entity, fact, and event views that consider information based on all or part of a corpus of documents to more specific information included in a particular article.

XII.2.2 Application Usage Scenarios

Corporate finance professionals in banks and corporate M&A groups often have overriding themes for the research that they do.
We can examine how a few of these themes might translate into typical interactions with Industry Analyzer's knowledge discovery capabilities.

Examining the Biotech Industry Trade Press for Information on Merger Activity

Much of the time of corporate finance analysts at investment banks and in the corporate development organizations of companies is spent tracking potential and actual merger activity. Knowing about specific potential mergers can allow corporate finance professionals to get involved with the participants in those transactions before a deal is completed, or to better advise existing clients in a timely fashion about the prospective ramifications of a transaction on the overall market. Having a full and current sense of completed mergers in an industry vertical is critical to understanding the competitive landscape within that vertical; this type of knowledge is a necessary component of client advisory in a competitive corporate environment in which all public corporations – and most private ones – are potential merger targets.

Trends in merger activity can also directly affect many specific areas of a corporate analyst's work. For instance, a sudden uptick in trade journal reports of merger activity may signal that an overall industry consolidation is taking place. If mergers are all clustered around a common set of product types, merger activity may suggest a hot area for new products because the activity may reflect pent-up customer demand or failures among some big companies' in-house R&D efforts on these products. Conversely, a leveling off of valuations paid in cash and stock for the smaller of the merged entities may suggest some cooling or maturation of the markets, which might indicate less favorable short- or midterm trends.

To explore merger activity with Industry Analyzer, an analyst might initiate a broad search for all "merger" events among biotech companies available in the document collection to see if any look interesting and worthy of further, more detailed investigation. In Industry Analyzer, this can be done simply through a series of graphical menus, starting with a pick list of events and facts available for search selection (see Figure XII.12).

Figure XII.12. Initiating a query for all merger events in the document collection. (Courtesy of the ClearForest Corporation.)

A subsequent pop-up menu offers the user the ability to add constraints to the query. In our example, the user chooses not to add any constraints because of the modest number of articles in the document collection. After the user clicks the OK button to initiate the query, the default table view brings back formatted results (see Figure XII.13).

At this stage, the analyst decides that he or she is quite interested in looking at a reference to the Biogen–Idec merger. The analyst can simply highlight the desired event and click on it to "jump" either to a summarized version of the relevant article or to an annotated view of the full text of the article containing the highlighted merger reference. Figure XII.14 shows what the analyst would encounter upon choosing the full, annotated view of the article text. When moving from the table view entry to the annotated full text of the article, the analyst is taken to the point in the article containing the reference.

A consistent color-coding methodology for identifying event and fact types can be used throughout Industry Analyzer.
This color-key approach can be used to identify event and fact occurrences visually by type in document text; it can also be carried over consistently into graph-type visualizations. In both cases, the user may choose which color-key scheme works best for his or her purposes.

Figure XII.13. Table view of all reported mergers between biotech companies in the document collection. (Courtesy of the ClearForest Corporation.)

Figure XII.14. A merger event in a BioWorld article. (Image courtesy of the ClearForest Corporation. Text in image courtesy of BioWorld.)

Exploring Corporate Earnings Announcements

Corporate finance analysts also expend a great deal of effort finding financial reports on companies that they do not typically follow. Unfortunately, although public company financial announcements are widely available, it is often very time-consuming for an analyst to find the needed financial information by wading through large numbers of inconsistently formatted financial reporting documents and press releases. Industry Analyzer's functionality makes it much easier to investigate corporate financial report information in the application's document collection. Indeed, it may not even be practically possible to find certain types of financial reporting information without an application like Industry Analyzer.

Suppose an analyst vaguely remembers hearing that a biotech company reported net income in the $130–140 million range for the third quarter of 2004. This analyst, however, does not remember – or never learned – the name of the company. In this type of situation – one variation on a very common scenario in which an analyst has only partial bits of tantalizing information about a company but not its name – the analyst could spend days sifting through voluminous financial reports issued by larger biotech and pharmaceutical companies before happening upon the right one. The analyst might further want to know whether more than one company met the criteria for the period. Hunting and pecking through financial reports and online databases even for a few days might not bring a sense of resolution for this search because so many traditional corporate information resources are (a) published in various types of financial reporting documents and (b) in general, rigidly organized around explicit knowledge of the company name one is interested in investigating.

The first step in Industry Analyzer would be to execute a query on the event type called Company Earnings Announcement. The analyst can find this by simply scrolling the main pick list of fact and event types in Industry Analyzer's main search menu (see Figure XII.15).

Unlike the earlier simple search for all companies reporting mergers in the document collection, this query also relies on some adjustment of constraint filters to help pinpoint potentially useful answers to the analyst's query. As can be seen in Figure XII.15, after selecting Company Earnings Announcement as the main event for the query, the analyst is given an opportunity to choose certain types of constraints. In the current example, the constraint filters are set to allow viewing of several attributes that have been extracted and identified as relating to Company Earnings Announcement–type events. The table view of the result set from this query can be seen in Figure XII.16.
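The essence of this partial-information search is a range filter over structured event attributes. A minimal Python sketch under assumed record fields (company, quarter, net income) might look as follows; the event records are invented for illustration and stand in for what Industry Analyzer's information extraction would produce.

    from dataclasses import dataclass

    @dataclass
    class EarningsEvent:
        company: str
        quarter: str          # e.g. "2004Q3"
        net_income_usd: float

    # Invented records standing in for events extracted from the collection.
    EVENTS = [
        EarningsEvent("Biogen Idec Inc.", "2004Q3", 132_000_000),
        EarningsEvent("Acme Biotech", "2004Q3", 22_000_000),
    ]

    def find_by_income(events, quarter, low, high):
        """Return companies whose reported net income falls within [low, high]."""
        return [e.company for e in events
                if e.quarter == quarter and low <= e.net_income_usd <= high]

    # The analyst remembers only "net income of $130-140 million, Q3 2004".
    print(find_by_income(EVENTS, "2004Q3", 130e6, 140e6))  # -> ['Biogen Idec Inc.']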
Figure XII.15. Constructing a query around company earnings announcement events. (Courtesy of the ClearForest Corporation.)

Scanning the columns of the table view would show that Biogen Idec meets the criteria sought by the analyst. The analyst can then highlight the appropriate table row, click, and choose to see a summary of the text in the article containing the original reference (see Figure XII.17).

Of course, the quality of search results for this type of information within a text mining application like Industry Analyzer will depend on the comprehensiveness of the data in the documents of the document collection and on the sophistication of the various information extraction rule bases used during preprocessing to identify events and link relevant attributes properly to these events. Nevertheless, querying for partially known corporate information against the vast amounts of text documents in which it is buried can be made vastly more practical with Industry Analyzer. Moreover, the result sets that are returned for queries encourage analysts to browse related data that may also be interesting and relevant to their overall mission.

Figure XII.16. Table view of company earnings reports. (Courtesy of the ClearForest Corporation.)

Figure XII.17. Browsing the summary view of the article containing the reference to Biogen Idec's third-quarter 2004 net income. (Courtesy of the ClearForest Corporation.)

Exploring Available Information about Drugs Still in Clinical Trials

Corporate analysts can also turn to the visualization tools of an application like Industry Analyzer to help explore the available information about a particular pharmaceutical or biotechnology product. Perhaps a corporate finance analyst is aware of a potential client's interest in new cancer treatment drugs – especially lung cancer treatment drugs that show early signs of promise as products. The analyst may have only a general awareness of this market niche and may thus want to leverage Industry Analyzer's visualization tools more directly to ferret out potentially interesting information from the articles in the application's document collection.

As a first step, the analyst may want to generate a query whose result set could be used to create a visualization of all connections (associations) between drugs and particular diseases in the BioWorld articles, to get a quick sense of the drugs that might be related to cancer treatment within the overall universe of drugs mentioned in the trade press. An example of the result-set visualization can be seen in Figure XII.18.

After viewing the circle graph visualization that identifies all the relationships, the analyst might note the relationships between lung carcinoma and the anticancer drugs Erlotinib and Cisplatin. Note that by simply highlighting any node along the circumference of the circle graph, a user can also highlight the edges representing connections with the various other associated nodes in the figure. For instance, as illustrated in Figure XII.18, if the user highlighted the disease Rheumatoid Arthritis, the four drugs with which this disease is associated in the underlying document collection would be identified. This type of visualization functionality allows the user to bounce quickly among diseases and drugs and to browse their various relationships at a high level.
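Underneath such a circle graph is little more than a weighted bipartite edge list built from concept co-occurrence. Here is a minimal Python sketch under the assumption that each document has already been tagged with its drug and disease concepts; the two sample documents are invented for the example.

    from collections import defaultdict
    from itertools import product

    # Invented, pre-tagged documents; real tags come from the preprocessing stage.
    docs = [
        {"drugs": {"Erlotinib", "Cisplatin"}, "diseases": {"Lung Carcinoma"}},
        {"drugs": {"Etanercept"}, "diseases": {"Rheumatoid Arthritis"}},
    ]

    edges = defaultdict(int)
    for doc in docs:
        for drug, disease in product(doc["drugs"], doc["diseases"]):
            edges[(drug, disease)] += 1  # edge weight = number of co-mentioning docs

    for (drug, disease), weight in sorted(edges.items()):
        print(f"{drug} -- {disease} (weight {weight})")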
After looking at this broad view of drug–disease relationships, the analyst might want to start focusing on cancer-related drugs and execute a distribution-type query showing all occurrences of drug names in BioWorld articles given a context of carcinoma and a proximity constraint of "within the same sentence." This would give the analyst a list of the most prominent anticancer drugs (i.e., those most frequently mentioned over a given period of time in the trade press). The analyst would then be able to generate a result set in a histogram like the one shown in Figure XII.19.

Figure XII.18. Circle graph showing connections between drugs and diseases. (Courtesy of the ClearForest Corporation.)

Figure XII.19. Histogram showing occurrences of drug names in BioWorld (context of carcinoma, in the same sentence). (Courtesy of the ClearForest Corporation.)

Exploring this graph might lead the analyst to the question: What are the cross-relationships, if any, between these drugs? Again, the analyst could generate a circle-graph relationship map, this time showing connections between drugs in the context of carcinoma (Figure XII.20).

Figure XII.20. Circle graph of relations between drugs (context of carcinoma, within the same sentence). (Courtesy of the ClearForest Corporation.)

Seeing Cisplatin in each of the last two graphs, the analyst might be interested in generating a query on Cisplatin and related companies within the context of carcinoma. A simple concept graph could be generated for the results of this query (see Figure XII.21). Thicker lines between concepts in this graph indicate connections that are potentially more meaningful according to some quality measure. The analyst could then discover more by clicking on the edge linking Cisplatin and Isis Pharmaceuticals to jump to a summary of the article containing the two concepts (see Figure XII.22). The summary of the article can then be read quickly for business news related to the two drugs. From this Title Browser interface, the user can also jump to the full text of the article.
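The "within the same sentence" proximity constraint at the heart of this scenario is straightforward to emulate. The Python sketch below uses a naive sentence splitter and an invented drug list to show the mechanics; a production system would rely on its preprocessing pipeline for both.

    import re
    from collections import Counter

    # Invented drug list; the real application would take these from its taxonomy.
    DRUGS = ["Cisplatin", "Erlotinib", "Gemcitabine"]

    def drug_distribution(text: str, context: str = "carcinoma") -> Counter:
        """Count drug mentions, but only in sentences that mention the context."""
        counts = Counter()
        # Naive sentence splitter; a production system would use its own pipeline.
        for sentence in re.split(r"(?<=[.!?])\s+", text):
            if context.lower() in sentence.lower():
                for drug in DRUGS:
                    if drug in sentence:
                        counts[drug] += sentence.count(drug)
        return counts

    sample = ("Cisplatin showed promise against lung carcinoma. "
              "Gemcitabine sales rose last quarter.")
    print(drug_distribution(sample))  # only the carcinoma sentence is counted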
Figure XII.21. Simple concept graph of Cisplatin and related companies (Pharmacyclics Inc., Isis Pharmaceuticals) within the context of carcinoma. (Courtesy of the ClearForest Corporation.)

Figure XII.22. Annotated view of the summary of the article containing both Cisplatin and Isis Pharmaceuticals within the context of carcinoma. (Courtesy of the ClearForest Corporation.)

XII.2.3 Citations and Notes

Some resources on text mining applications for finance include Spenke and Beilken (1999), IntertekGroup (2002), and Kloptchenko et al. (2002).

Industry Analyzer is not a commercial product, and the implementation described in this chapter, although fully operational, was completed for demonstration purposes. This application does, however, leverage some commercial technologies, particularly in its presentation-layer elements, derived from ClearForest Corporation's commercial product ClearForest Text Analytics Suite. All rights belong to the owner. Information on ClearForest and its text analytics products can be found at <www.clearforest.com>.

The Thomson BioWorld Online homepage can be found at <www.bioworld.com>. Textual content for articles used in the Industry Analyzer examples comes from BioWorld. All rights to the content from the BioWorld articles used for the examples in this section are held by their owners. A comprehensive listing of BioWorld publications can be found at <accumedia.web.Dispatcher?next=bioWorldPubs>. A public version of the National Cancer Institute's Metathesaurus is available online from the National Cancer Institute.

XII.3 A "HORIZONTAL" TEXT MINING APPLICATION: PATENT ANALYSIS SOLUTION LEVERAGING A COMMERCIAL TEXT ANALYTICS PLATFORM

Patent research and analysis have become the business foci of a growing number of professionals charged with helping organizations understand how best to leverage intellectual property – and avoid problems created by other organizations' intellectual property rights – in their business activities. Patent research encompasses a wide range of at least partially related activities involving the investigation of the registration, ownership rights, and usage of patents. A common denominator among almost all of these activities, however, is the need to collect, organize, and analyze large amounts of highly detailed and technical text-based documents.

Patent analysis solutions might be called "horizontal" applications because, although they have a narrow functional focus on patent-related documents, patent analysis has wide applicability to many different businesses. Professionals in both public and private companies across many different industries – not to mention the intellectual property (IP) departments of many law firms and consultancies – have responsibility for providing input into corporate patent strategies. Such input needs to take into account not just the potentially "harvestable" IP that a particular company may have but also all of the published indications of IP rights that other companies may already possess.

Patent strategy, however, is not just about the particulars of what is patented (both in-house and within the larger market); it also relates to which individuals and companies are creating patents (and for what technologies), which companies are licensing patents (and for what technologies), and the relationships between these various market participants. Patent research should yield more than good defensive information about IP; it should also yield new business opportunities by identifying new development partners, likely licensers or licensees, or wider market trends relating to particular types of patents.

This section describes a patent analysis application called Patent Researcher.
One of the interesting aspects of presenting this application is how it can leverage the functionality of a more generic commercial text mining platform – here derived primarily from components marketed by ClearForest Corporation. Patent Researcher processes patent-related documents – primarily granted U.S. patents – to enable a patent manager to mine information related to patented technologies, patent claims, original patent inventors, original patent owners, and patent assignees by means of queries and interactive visualization tools.

Figure XII.23. ClearForest Corporation's approach to describing the high-level architecture of the ClearForest Text Analytics Suite: a tagging layer (entity discovery, entity recognition, categorization), industry modules (relationship discovery and recognition), and an analytics layer (link detection, visualization, reporting).

The implementation of the Patent Researcher application discussed in the following sections has been configured to handle typical patent analysis activities. A sample corpus of patent documents related to commercial defibrillator device technologies is used as a raw data source. This illustrative implementation reflects the types of application settings and usage one might encounter among patent professionals in corporate intellectual property departments, attorneys at patent-oriented law firms, or patent engineers at specialist engineering consultancies.

XII.3.1 Patent Researcher: Basic Architecture and Functionality

Patent Researcher relies on ClearForest Corporation's suite of text mining software components and, as a result, shares the general architecture of these components. ClearForest's Text Analytics Suite follows a general pattern not too unlike that described in Section I.2, though the company's high-level description of it has perhaps been simplified for the commercial market (see Figure XII.23). Effectively, ClearForest's platform follows in a generalized way most of the architectural principles presented in Section I.2 and in Section XII.2.1 of this chapter's discussion of Industry Analyzer. On a slightly more granular level, the functional architecture for Patent Researcher is described in Figure XII.24.

Figure XII.24. Patent Researcher's functional architecture: preprocessing routines (categorization, term extraction, and a proprietary industry rulebook for information extraction) build a processed document collection from patents; core mining operations (pattern discovery algorithms and trend analysis) draw on background knowledge; and a GUI offers the patent manager browsing, query construction, visualization, and refinement filters.

Although quite similar to the overall architecture described for the Industry Analyzer application in Section XII.2.1, Patent Researcher's architecture evinces a few distinct differences and leverages several advantages provided by the commercial ClearForest Text Analytics Suite platform. The most notable of these are discussed in the following sections.

Data and Background Knowledge Sources

The raw data for the Patent Researcher application consist of 411 granted U.S. patents for external defibrillator devices. A "quick search" for the key term "external defibrillator" was executed at the U.S. Patent and Trademark Office Web site to find 411 patents for the period from 1 January 1976 to 1 January 2004. These full-text, semistructured (HTML-formatted) documents constitute the target corpus for preprocessing operations.
Patent Researcher has the capacity to integrate various external thesauri, taxonomies, and ontological dictionaries. These knowledge sources are particularly valuable when document collections need to include trade journals, news articles, or internal corporate documents, for which there is an increased need to infer meaning or relationship information about extracted concepts.

However, even in implementations of patent analysis text mining applications – like the one discussed here – that make use of formal granted-patent documents as their exclusive source of document data, background knowledge sources can be particularly helpful in preprocessing extraction activities for entities, facts, and events involving technical terminology, company names (often as patent assignees), and legal language. The Patent Researcher implementation described here uses only three simple knowledge sources: a lexicon of English words; a simple, manually created dictionary of important legal and patent terms; and a simple dictionary of corporate names.

Preprocessing Operations

Patent Researcher's preprocessing operations have similarities with those discussed for the Industry Analyzer application. Both rely on term extraction and information extraction techniques to create a processed document collection that identifies entities, facts, and events. In addition, both use very similar rule-based languages (Patent Researcher uses a robust commercial version of the DIAL language).

One significant additional approach of Patent Researcher, however, is its implementation of categorization routines to create a taxonomy for use in the application automatically. Making use of the semistructured nature of patent documents (U.S. patent grants – especially after 1976 – generally follow a standardized U.S. Patent and Trademark Office format), automatic taxonomy generation typically yields a very useful taxonomy for organizing patent registrations and claim details, assignees, inventors, examiners, relevant corporate information, and technical terms, as well as high-level categories for various aspects of patented items' functionality, usage, related patents, and so on. A typical taxonomy displayed by Patent Researcher's Taxonomy Chooser interface is shown in Figure XII.25; the figure illustrates the hierarchical positioning and a partial listing of relevant invention terms.

Figure XII.25. Viewing the taxonomy created by means of automatic categorization in Patent Researcher. (Courtesy of the ClearForest Corporation and Joseph F. Murphy.)

Patent Researcher implements a productionized version of a Ripper-like machine-learning algorithm (for more description of this algorithm, see Chapter III). The Ripper algorithm has proven useful across a wide range of categorization situations. In the case of patent information, the semistructured nature of much of the textual data would allow many categorization algorithms to perform relatively well, but the Ripper-like algorithm's capacity for constructing classifiers in which the "context" of a term determines whether or not that term affects a classification can be beneficial for categorizing the large numbers of technical terms used in patent documents.
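The flavor of such context-sensitive rules can be suggested with a small Python sketch. Ripper induces its rules from training data; here the two rules are written by hand purely to illustrate the form of the hypothesis (conjunctions of required and prohibited terms), and the category names and term sets are invented.

    # Hand-written rules purely to illustrate the hypothesis form Ripper induces:
    # conjunctions of terms that must appear and terms that must not.
    RULES = [
        # (category, must_contain, must_not_contain) -- illustrative only
        ("external-defibrillator", {"defibrillator", "electrode"}, {"implantable"}),
        ("implantable-defibrillator", {"defibrillator", "implantable"}, set()),
    ]

    def categorize(terms: set[str]) -> list[str]:
        """Return every category whose rule fires on a document's extracted terms."""
        return [cat for cat, required, forbidden in RULES
                if required <= terms and not (forbidden & terms)]

    doc_terms = {"defibrillator", "electrode", "battery"}
    print(categorize(doc_terms))  # -> ['external-defibrillator']

Here the presence or absence of "implantable" is the contextual evidence that changes how "defibrillator" is interpreted, which is the behavior the surrounding discussion attributes to context-sensitive classifiers.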
Categorization is especially useful in patent analysis, for many typical searches performed by patent professionals can benefit from using category information – particularly when a category label, which may not be a term found within an individual patent document's native set of extracted terms, is used as part of a query's context. For instance, patent investigators often use various intra-, inter-, and cross-category searches to answer numerous questions about potential patent overlap or infringement, competitive landscape situations among patent assignees and licensees, areas of potential synergy between assignees holding intellectual property in similar or complementary business areas, and patterns of development of new patents across niche business industries.

Core Mining Operations and Refinement Constraints

Patent Researcher is able to leverage the extremely large range of query types afforded by the commercial platform on which it is built. This means that the application provides a full range of distribution, proportion, frequent set, and association queries (including maximal association queries).

Although patent managers certainly benefit from flexible exploration of patterns in the data of patent document corpora, Patent Researcher offers several preconfigured query formats that are used very often during patent analysis activities. These include the following:

■ Frequency distribution of assignees with patents in the current collection;
■ Distribution of patents representing a particular technology over time; and
■ Assignees that appear together on the same patent (indicating a joint venture, joint development, partnership, or other corporate relationship).

Indeed, with respect to the second of these preconfigured queries, a particularly notable difference from the capabilities offered to users of Industry Analyzer is Patent Researcher's ability to generate simple but extremely useful trend analysis results for patent professionals interested in diverse types of trend and time-series information. Granted patents have consistent descriptions of various important dates (date of initial application, patent issue date, etc.). These dates are extracted during preprocessing operations and made available to a wide range of distribution queries that can chart various micro- and macrotrends over time.

Patent Researcher supports narrowly focused knowledge discovery operations, such as allowing a patent manager to create a query yielding a result set describing the distribution of patents over time for a particular company. The application also has the ability to compare this distribution against the distribution of patents issued to a competing company over the same period. To allow examination of wider trends, Patent Researcher supports queries that permit a user to request all patents issued for several different broad areas of technology over some period. This type of search might reveal – all in relation to other areas of intellectual property – which areas are hot, which are growing cold, and which have dried up, based on patterns of patent application and issuance.

Patent Researcher supports a wide assortment of constraints on nearly all of its query types, including typical background, syntactical, quality-threshold, and redundancy constraints. The application also supports time-based constraints on many queries, allowing variation on trend analysis queries and flexibility in comparing distributions over different time-based divisions of the document collection.
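A minimal sketch of the time-series side of such queries: given (assignee, issue-year) pairs extracted during preprocessing, a per-year issuance trend is just a grouped count. The records below are fabricated for illustration and imply nothing about the real distribution in the 411-patent collection.

    from collections import Counter

    # Fabricated (assignee, issue_year) records standing in for the dates
    # extracted during preprocessing; they imply nothing about the real corpus.
    patents = [
        ("SurvivaLink Corporation", 1998),
        ("Heartstream Inc", 1998),
        ("SurvivaLink Corporation", 1999),
        ("SurvivaLink Corporation", 1999),
    ]

    def issuance_trend(records, assignee):
        """Count patents issued per year for a single assignee."""
        return Counter(year for who, year in records if who == assignee)

    print(sorted(issuance_trend(patents, "SurvivaLink Corporation").items()))
    # -> [(1998, 1), (1999, 2)]

Comparing two assignees, or slicing the same records by technology category instead of assignee, reuses the identical grouped-count pattern, which is why a platform can expose all of these as variations of one distribution query.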
Presentation Layer – GUI and Visualization Tools

Patent Researcher offers patent professionals an extremely rich selection of graphical browsing and visualization tools. Queries are all performed from GUI screens that make it possible for users to populate search input variables from either pull-downs or pop-up hierarchies of entities, facts, events, and categories appropriate as input to the query. Almost all queries in the system – including preconfigured queries – support various constraints that can be chosen from scroll-bars or pull-downs.

Users can browse hierarchies of categories, entities, facts, and events. From many of the hierarchy browser screens, patent professionals can pull up various GUI-type query screens and use highlighted elements of a hierarchy to populate queries – that is, users can easily move from browsing the hierarchies to executing queries based on information discovered while browsing.

In addition, users can pull up full or partial listings of all patents in Patent Researcher's document collection and browse these listings by patent title. From this Title Browser, patent professionals can shape the ordering of patents within the browser by several parameters, including date, category, and assignee. By clicking on any title in the title browser, a user can be brought to either an annotated view of the full text of the associated patent document or a URL link to the actual patent document on the U.S. Patent and Trademark Office Web site.

Figure XII.26. A Patent Researcher spring graph of concepts associated with the concept "external defibrillator." (Courtesy of the ClearForest Corporation and Joseph F. Murphy.)

Upon executing a formal query, a user of Patent Researcher will receive the answer set in a default table-view format. Even trend data in the result sets from trend analysis queries are available in a table view. From this table view, a patent professional can perform a few functions. First, he or she can generate a visualization of the data in a result set. Second, the user can access a pop-up query screen and reexecute the query with different input variables or constraints. Third, the user can click on a row in the table view to move to a pop-up of the Title Browser with a listing of relevant patents, from which he or she can navigate to the full text of a patent. Finally, a comma-delimited file of the data displayed in the table view can be downloaded.
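That last option is essentially a plain dump of the on-screen table. A minimal sketch, with an invented header and rows:

import csv

def export_table_view(path, header, rows):
    # write the on-screen table view as a comma-delimited file
    with open(path, "w", newline="", encoding="utf-8") as f:
        writer = csv.writer(f)
        writer.writerow(header)
        writer.writerows(rows)

export_table_view("assignee_distribution.csv",
                  ["Assignee", "Issued Patents"],
                  [("SurvivaLink Corporation", 42),
                   ("InControl, Inc.", 37)])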
In addition to basic histograms, circle graphs (as relation or association graphs), and simple concept graphs common to text mining applications (and seen in Industry Analyzer), Patent Researcher also supports the following visualization types: circle graphs (as category graphs), line- and histogram-based trend graphs, spring graphs, and multigraph and hybrid circle graph visualizations. Figure XII.26 shows an example of a Patent Researcher spring graph.

All visualizations implement a consistent point-and-click paradigm that allows a user to highlight any point in a graph and click to navigate to actual patents related to the highlighted entity, fact, event, or category. Typically, when a highlighted node in a visualization is clicked, the Title Browser pop-up is initiated. From this pop-up, a user can either elect, as usual, to click on a particular patent title and go to an annotated version of the patent text in Patent Researcher's document collection or click on the URL for the live version of the appropriate patent on the U.S. Patent and Trademark Office's official Web site.

XII.3.2 Application Usage Scenarios

Patent Researcher has several common usage scenarios. A brief review of a few of these from the perspective of the patent professional can be useful for understanding how text mining techniques can be leveraged to provide solutions to real-world business problems involving the management of intellectual property.

Figure XII.27. Distribution analysis query screen. (Courtesy of the ClearForest Corporation and Joseph F. Murphy.)

Looking at the Frequency Distributions among Patents in the Document Collection

Among the most basic and common tasks that patent professionals perform is the review of large numbers of patents to discern the frequency of patent activities among companies with respect to a given area of technology or business focus. Patent Researcher makes such a review a simple and easy process for patents in its collection.

In a typical scenario, a patent manager might be interested in exploring the distribution of patents among assignees in the field of external defibrillators. First, he or she would execute a distribution analysis query for all assignees with relevant patents (see Figure XII.27).

After executing the query, the user would receive a table view of all assignees ordered according to the number of patents they had been issued in the document collection. From this screen, a quick histogram graphically demonstrating the distribution pattern could be generated. The resulting graph is shown in Figure XII.28. This graph shows that SurvivaLink, InControl, and Heartstream are the top three assignees for external defibrillator patents in the document collection, with Medtronic and Agilent Technologies rounding out the top five.

Patent Researcher's visualizations are highly interactive, and thus by clicking on either one of the assignee names on the left side of the graph or one of the histogram bars on the right, a user is given a choice of editing his or her original query and generating a refreshed graph for the new search results or of seeing a Title Browser listing of all patents connected to the assignee chosen. Let us say our user clicks on the fourth-place assignee, Medtronic, and chooses to see a Title Browser listing Medtronic's patents. A Title Browser in a pop-up like the one illustrated in Figure XII.29 will then appear.
After navigating through the list of patents, the user could select all and see a listing that includes a few descriptive sentences extracted from each patent as an aid to browsing. The user could also click on a patent and go either to an annotated full-text version of the patent in the document collection or to the URL for the official patent text at the U.S. Patent and Trademark Office Web site (see Figure XII.30).

Figure XII.28. Histogram of results from distribution query for issued patents by assignee. (Courtesy of the ClearForest Corporation and Joseph F. Murphy.)

Figure XII.29. Patent Researcher's title browser showing a listing of relevant patents. (Courtesy of the ClearForest Corporation and Joseph F. Murphy.)

Exploring Trends in Issued Patents

Another usage scenario commonly encountered in patent analysis activities is exploring how patterns of patent issuance (or application) for particular technologies evolve. Understanding such trends can be critical to deciding whether a company should develop its own technology, patent its own technology, or attempt to license another company's technology. These trends can also show whether the number of new patents is rising, plateauing, or falling, indicating the current strength of interest in innovation for a particular technology area.

A patent manager might interpret a steadily increasing, multiyear trend in patents related to a technology that his or her client company is developing as encouraging because it shows a growing interest in related business areas. Conversely, a precipitous recent fall in the number of issued patents for a technology that comes on the heels of a sudden spike might indicate some sort of problem; perhaps a small number of patents have come to dominate the business area, or the underlying business demand for this technology area has cooled or even disappeared. Often, in answering these questions, it is useful to be able to compare trend lines for several different related events within the same visualization. Patent Researcher provides such trend analysis capabilities.

Figure XII.30. The full-text version of a patent at the U.S. Patent and Trademark Office Web site.

A sample patent analysis problem might involve a patent manager's wanting to see the trends for two different defibrillator technologies – "biphasic" and "auto external defibrillation" devices – from 1984 to the present. Further, the patent manager might want to compare these two trends against a trend line showing all the defibrillator patents issued to a single company, Medtronic, over the same period.
Because patent documents provide very clear date-related information for several events – perhaps most notably patent application and patent issuance – preprocessing operations can comprehensively tag a document collection with the type of date-related information that allows analysis of trend information across documents – and generally facts and events as well – in the collection.

In this case, the user would first go to the trend analysis query screen and set up the simple parameters of this search. The result set would initially be provided in a table view with rows representing one of the search concepts (auto external defibrillation, biphasic, Medtronic) and columns representing quarters. Columns could be set in increments as granular as one day or as broad as one year. The user could then elect to view the data as a line-based trend graph (see Figure XII.31).

The user can set the graph to show a version of the table view at the bottom of the trend graph and examine the two views at the same time. The user can also scroll left or right to see the full extent of the timeline or call up his or her query screen again to change any of the parameters of the original search. Lines in the graph can be shown in different colors defined in a key at the bottom left part of the screen.

For clarity of visualization, Patent Researcher can bundle related terms together under a single label. In the example shown within Figure XII.32, the second and third searched-for concepts (biphasic and Medtronic) are groupings of concepts under one label. This capability is generally important in text mining applications but is particularly so in applications like Patent Researcher, in which many technical terms and corporate entity names may actually refer to the same real-world entity (or be more useful grouped together on an ad hoc basis because the terms belong to some logical set interesting to the user).

Figure XII.31. Line-based trend graph: "Frequency of Issuance of AutoExternalDefibrillator, BiPhasic and Medtronic Patents" (number of patents by issue date). (Courtesy of the ClearForest Corporation and Joseph F. Murphy.)

Of course, a patent manager may also want to look at the same trend data in a way that shows the cumulative number of patents for all searched-on entities while also making it visually clear what rough percentage each entity makes up of this cumulative number. For this, the patent manager could choose to generate a histogram-based trend graph. Figure XII.32 shows an example of this type of graph. In Figure XII.32, a portion of the table view of the concept-occurrence data is still viewable at the bottom of the screen.

Figure XII.32. Histogram-based trend graph. (Courtesy of the ClearForest Corporation and Joseph F. Murphy.)
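A minimal sketch of the quarterly bucketing and term bundling just described; the record fields and the bundle members are illustrative guesses, with bundle labels echoing the legend of Figure XII.31.

from collections import Counter

def quarter(iso_date):
    # "2003-10-01" -> "2003-Q4"
    year, month, _ = iso_date.split("-")
    return f"{year}-Q{(int(month) - 1) // 3 + 1}"

# each label bundles several extracted terms/entity aliases under one line
bundles = {
    "autoexternaldefib": {"automatic external defibrillator", "AED"},
    "biphasicterms": {"biphasic", "biphasic waveform"},
    "Medtronic": {"Medtronic Physio-Control Corp.",
                  "Physio-Control Corporation"},
}

def trend_table(patents, bundles):
    # rows = bundle labels, columns = quarters of the issue date
    table = {label: Counter() for label in bundles}
    for p in patents:
        concepts = set(p["terms"]) | set(p["assignees"])
        for label, aliases in bundles.items():
            if concepts & aliases:
                table[label][quarter(p["issue_date"])] += 1
    return table

patents = [{"issue_date": "1997-04-01", "terms": ["biphasic"],
            "assignees": ["Heartstream Inc."]}]
print(trend_table(patents, bundles)["biphasicterms"])
# Counter({'1997-Q2': 1})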
Patent Researcher allows a patent manager to move back and forth between the two trend graph visualization types or to generate each in a separate window to permit visual comparisons.

XII.3.3 Citations and Notes

Patent Researcher is not a commercial product. However, it has been partly based on feedback provided by ClearForest Corporation on real-world, commercial uses of the company's Text Analytics Suite product by patent professionals. Ideas and input for the usage scenarios come from the work of patent attorney Joseph Murphy using the ClearForest Text Analytics Suite product. Joseph Murphy's Web site can be found at <www.joemurphy.com>.

XII.4 LIFE SCIENCES RESEARCH: MINING BIOLOGICAL PATHWAY INFORMATION WITH GENEWAYS

GeneWays, Columbia University's ambitious application for processing and mining text-based documents for knowledge relating to molecular pathways, presents a contrast to Industry Analyzer and Patent Researcher. Whereas Industry Analyzer and Patent Researcher have architectures that exhibit a relatively balanced emphasis on preprocessing, core mining algorithms, and presentation-layer elements (with a somewhat less aggressive emphasis on background knowledge and refinement techniques), GeneWays emphasizes complex preprocessing and background knowledge components with significantly less focus – at least up to the present – on query algorithm, presentation-layer, and refinement elements. These differences derive from several difficult challenges that arise in processing life sciences research documents containing molecular pathways information.

The GeneWays application is an attempt to build a comprehensive knowledge discovery platform using several processes for high-quality extraction of knowledge from research papers relating to the interaction of proteins, genes, and messenger RNA (mRNA). GeneWays' core mission is the construction – or reconstruction – of molecular interaction networks from research documents, and the eventual aim is to include information on all known molecular pathways useful to biomedical researchers. As a first step, the application is focused on molecular interactions related to signal-transduction pathways.

Figure XII.33. GeneWays' functional architecture.

XII.4.1 GeneWays: Basic Architecture and Functionality

From published sources, GeneWays seems to follow along the lines of the same rough architecture for a text mining system shown in Industry Analyzer and Patent Researcher. However, GeneWays is a complex application, and specialized nuances appear in many elements of this architecture when the system is examined in any detail. A generalized view of this architecture can be seen in Figure XII.33.

Data and Background Knowledge Sources

Raw data for GeneWays comes from English-language biomedical research documents on molecular pathways that are downloaded from online World Wide Web resources; these documents are saved first in HTML and then converted into a basic XML format.
After GeneWays' extensive array of preprocessing operations is run against these still semiraw data, processed data are stored in an interaction database that combines entity information with relationship information to allow users to interact with complex network-type models of molecular-level protein interactions.

GeneWays uses several aids to providing background knowledge to inform its various operations – particularly preprocessing operations. The GenBank database is used to furnish expert knowledge for protein extraction activities at the term level in the Term Identifier; GenBank is supplemented by information from the Swiss-Prot database for further tagging activities in GeneWays' Parsing Engine GENIES (GENomics Information Extraction System), which performs a few additional tagging roles and acts as GeneWays' information extraction utility. In addition, GENIES has the ability to make use of external lexicons and formal grammars appropriate to life sciences applications.

Preprocessing Operations

Textual data containing molecular pathways information have been described as "noisy data" because it is not a straightforward or simple task to identify entities or interaction-type relationships reliably from scientific literature. GeneWays' preprocessing operations represent one of the most advanced attempts to extract such information and allow scientists to explore high-quality models of protein interaction networks based on natural language processing techniques.

After biomedical literature has been culled from various Web sources, it is sorted and converted into a basic, tagged XML-like format. GeneWays' first major preprocessing operation is the Term Identifier module, which extracts biologically significant concepts in the text of documents, such as the names of proteins, genes, processes, molecules, and diseases. After an initial set of such concepts has been identified, GeneWays runs the results through its Synonym/Homonym Resolver, which attempts to resolve the meaning of a particular entity by assigning a single "canonical" name to each concept's multiple aliases. The Term Classifier next runs a series of disambiguation operations against the results of the Synonym/Homonym Resolver in an effort to resolve any remaining sense ambiguities. After these three preprocessing operations have been run, GENIES begins its processing tasks.

GENIES combines several processing activities in its operations; a generalized architecture can be seen in Figure XII.34. The GENIES system is based on the MedLEE medical NLP system and incorporates both rules and external knowledge sources in its sophisticated term-tagging. It also extracts information to output semantic trees – essentially, a machine-readable format identifying nested relationships with normalized forms of verbs (e.g., bind, binding, and binder).

Figure XII.34. GENIES information extraction subsystem.

The final preprocessing step for GeneWays is to take the nested relationship information output by GENIES and run a Simplifier process. This Simplifier converts nested relationships into a collection of more useful binary statements of the form "interleukin-2 activates interleukin-2 receptor" – a formal statement that includes two entities with an action verb. These statements are then saved directly into GeneWays' Interaction Database.
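As an illustration only (the published descriptions do not spell out the Simplifier's algorithm), a nested semantic tree can be flattened into binary statements along the following lines. Representing trees as (verb, argument, argument) tuples, and letting a nested relation be stood in for by the entity of its first argument, are both simplifying assumptions.

def head(arg):
    # representative entity for an argument; for a nested relation we
    # (by assumption) take the entity of its first argument
    return arg if isinstance(arg, str) else head(arg[1])

def simplify(tree, out=None):
    # depth-first flattening of a nested semantic tree into binary
    # (entity, verb, entity) statements
    if out is None:
        out = []
    verb, left, right = tree
    out.append((head(left), verb, head(right)))
    for arg in (left, right):
        if isinstance(arg, tuple):
            simplify(arg, out)
    return out

nested = ("activate", "interleukin-2",
          ("bind", "interleukin-2 receptor", "JAK1"))
for subject, verb, obj in simplify(nested):
    print(subject, verb, obj)
# interleukin-2 activate interleukin-2 receptor
# interleukin-2 receptor bind JAK1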
One final component of the preprocessing operations of GeneWays is the system's Relationship Learner module. This takes the output of the Term Identification/Synonym-Homonym/Term Classifier processes and identifies new semantic patterns that can be examined by system developers and later incorporated into GENIES. However, the Relationship Learner is only run during system improvement and maintenance periods and thus is not a normal part of production data processing operations.

Core Mining Operations and Presentation Layer Elements

At present, GeneWays' query functionality appears primarily contained in a stand-alone front-end program called CUtenet, which, at the time of this writing, appears to offer limited but useful query functionality. The front-end "portal" to the GeneWays Interaction Database allows a user to retrieve interactions that answer particular query parameters and view these interactions in graphical form. An example of GeneWays' GUI for displaying results of an interaction query can be seen in Figure XII.35.

CUtenet can generate both full-color, three-dimensional visual representations of molecular pathways and two-dimensional models for faster rendering. The front end appears to be able to generate both simple concept graphs and complex network representations (see Figure XII.36). By interacting with the visualization (i.e., by clicking on edges connecting entities), a user can jump to the underlying binary statement and either build new queries around that statement or jump to the actual online full-text version of the article(s) from which the interaction came (this assumes that the user has an appropriate password for the journals or online sources hosting the online article).

CUtenet also supports a fair amount of additional functionality. For instance, users can save images in various formats ranging from VRML to BMP, JPEG, PNG, and PostScript. Users can edit the layout and content of molecular map images. More importantly, they can actually edit the original pathway data in text format. CUtenet also supports an interface to allow a user to upload a single article into GeneWays for processing and storage of its molecular pathway information in the GeneWays Interaction Database; at the end of processing, the user can see a visual model of the molecular interactions in the article that he or she has input to the system.

XII.4.2 Implementation and Typical Usage

The GeneWays implementation hosted at Columbia University has extracted molecular pathways information related to signal transduction from more than 150,000 articles. The Interaction Database has been described as containing several hundred thousand unique binary molecular pathways statements.

Rzhetsky, Iossifov, et al. (2004) describe a typical user interaction with the application. If a user is interested in exploring interactions involving the protein collagen, he or she would enter a query into CUtenet for all statements (binary formal statements, each describing a formal protein-to-protein interaction) in the Interaction Database involving the concept collagen. The query would return a listing of all 1,355 interactions in the database involving collagen. The user can then choose to use GeneWays' primary practical refinement filter, which is based on a simple threshold parameter: it allows a user to filter out all interaction statements that do not appear at least a certain number of times in unique sentences within the articles from which the Interaction Database was created.
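A sketch of such a support-threshold filter over an in-memory statement store; the four-field record layout is an assumption made for illustration, not GeneWays' actual schema.

from collections import defaultdict

def filter_by_support(records, entity, threshold):
    """records: (subject, verb, object, sentence_id) tuples, one per
    supporting sentence. Keep statements involving `entity` that are
    backed by at least `threshold` distinct sentences."""
    support = defaultdict(set)
    for subject, verb, obj, sentence_id in records:
        if entity in (subject, obj):
            support[(subject, verb, obj)].add(sentence_id)
    return {stmt: len(sents) for stmt, sents in support.items()
            if len(sents) >= threshold}

records = [("collagen", "bind", "integrin", f"s{i}") for i in range(20)]
records += [("collagen", "activate", "mmp-2", "s99")]
print(filter_by_support(records, "collagen", 15))
# {('collagen', 'bind', 'integrin'): 20}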
If the user set the threshold requirement to request only interactions that appeared in the database at least 15 times from different unique sentences, the query would bring back a result set of 12 interactions. The user could then use CUtenet to generate a visualization like the one seen in Figure XII.37.

By clicking on a given edge in the graph, the user can see the specific interaction statement contained in the Interaction Database. Clicking a second time shows the exact sentence in the specific article from which this interaction was originally extracted. This is the primary query type supported by the system. However, by changing the input entity being searched for among statements and manipulating the simple threshold filter, a user can navigate through the entire molecular interaction model represented by the ontology-like Interaction Database.

Figure XII.35. CUtenet's "action table" – a query-results GUI for interaction information found in articles. (Courtesy of Andrei Rzhetsky.)

Figure XII.36. A CUtenet dimensional visualization showing the PPAR protein's interactions with other proteins. (Courtesy of Andrei Rzhetsky.)

Figure XII.37. Interactions involving collagen that appear in at least 15 independent sentences. (From Rzhetsky, Iossifov, et al. 2004.)

XII.4.3 Citations and Notes

General articles relevant to text mining for exploring molecular pathways include Fukuda et al. (1998); Craven and Kumlien (1999); Rindflesch, Hunter, and Aronson (1999); Salamonsen et al. (1999); Rindflesch et al. (2000); Stapley and Benoit (2000); Pustejovsky et al. (2002); Caldon (2003); Dickerson et al. (2003); Yao et al. (2004); and Zhou and Cui (2004).

GeneWays is a system in continuing development. Articles describing elements of the GeneWays project include Koike and Rzhetsky (2000); Krauthammer et al. (2000); Rzhetsky et al. (2000); Hatzivassiloglou, Duboue, and Rzhetsky (2001); and Rzhetsky et al. (2004). Examples in this section are taken from Rzhetsky et al. (2004), as is Figure XII.35. The generalized architectural views presented in Figures XII.33 and XII.34 are derived in particular from Rzhetsky et al. (2000), Hatzivassiloglou et al. (2001), and Rzhetsky et al. (2004).

Information on the National Center for Biotechnology Information's (NCBI) GenBank database can be found on the NCBI Web site. A good description of the Swiss-Prot database is contained in Bairoch and Apweiler (2000).

APPENDIX A

DIAL: A Dedicated Information Extraction Language for Text Mining

A.1 WHAT IS THE DIAL LANGUAGE?

This appendix provides an example of a dedicated information extraction language called DIAL (declarative information analysis language). The purpose of the appendix is to show the general structure of the language and offer some code examples that will demonstrate how it can be used to extract concepts and relationships; hence, we will not cover all aspects and details of the language.

The DIAL language is a dedicated information extraction language enabling the user to define concepts whose instances are found in a text body by the DIAL engine.
A DIAL concept is a logical entity, which can represent a noun (such as a person, place, or institution), an event (such as a business merger between two companies or the election of a president), or any other entity for which a text pattern can be defined. Instances of concepts are found when the DIAL engine succeeds in matching a concept pattern to part of the text it is processing. Concepts may have attributes, which are properties belonging to the concept whose values are found in the text of the concept instance. For instance, a "Date" concept might have numeric day, month, and year attributes and a string attribute for the day of the week.

A DIAL concept declaration defines the concept's name, attributes, and optionally some additional code common to all instances of the concept. Each concept may have several rules, each of which corresponds to a different text pattern and each of which finds an instance of the same concept. Because each pattern is different, each rule will have separate code for handling that specific pattern. A text pattern is a sequence of text elements such as string constants, parts of speech (nouns, verbs, adjectives, etc.), and scanner elements (such as a capital word or a number) as well as other concepts and elements.

The following is a simple example of patterns for finding instances of "Person":

concept Person {
    attributes:
        string Title;
        string FirstName;
        string MiddleName;
        string LastName;
};

rule Person {
    pattern:
        "mr." -> title Capital -> first Capital -> last;
    actions:
        Add(Title <- title, FirstName <- first, LastName <- last);
};

rule Person {
    pattern:
        "dr." -> title Capital -> first Capital -> last;
    actions:
        Add(Title <- title, FirstName <- first, LastName <- last);
};

rule Person {
    pattern:
        Capital -> first MiddleNameConcept -> mid Capital -> last;
    actions:
        Add(FirstName <- first, MiddleName <- mid, LastName <- last);
};

In this example, the concept "Person" has three different rules (with the same name as the concept), each of which has a different text pattern. Each rule finds an instance of "Person." Note that not all patterns fill all attributes of "Person." For instance, in the first rule, there is no value for the "MiddleName" attribute, whereas in the last rule there is no value for the "Title" attribute.

A.2 INFORMATION EXTRACTION IN THE DIAL ENVIRONMENT

There are many types and methods of information extraction. In the DIAL language, text patterns are defined, and an automated search for these patterns is executed. When a match for a pattern is found, the text matching the pattern is marked as an instance of the particular pattern's concept.

A DIAL module is a collection of concepts and their rules contained in one or more DIAL code files but defined by a single DIAL module file. Concepts grouped in the same module will usually have some common characteristic. For instance, there might be a module for finding different types of financial entities that contains the concepts of those entities and any utility concepts they rely on. Grouping DIAL code into different modules makes it easier to understand and maintain each module, allows a single module to be changed and recompiled without having to recompile any other modules, and enables reuse of the module in several Discovery Modules.
A DIAL Discovery Module is a collection of several DIAL modules and plug-ins, all of which are run on all text documents in order to perform a complete information extraction process. The DIAL language's rules are developed in a dedicated development environment called ClearLab. This is a GUI application that enables editing and compiling of DIAL rules as well as running the information extraction process using these rules. One can then examine the results of the information extraction process and modify the rules where necessary. This is similar to the debugging process for any programming language.

Once a DIAL Discovery Module has been developed, it can be used by a tagging server that has several components for text processing, including categorization, pattern-based extraction, topological extraction, and so on. The component that performs information extraction based on a DIAL Discovery Module is called IE Task. Figure A.1 describes the operation of the IE Task in the tagging server environment.

Figure A.1. IE task as run by the extraction server.

1. A document is sent to the Extraction Server for processing.
2. The Extraction Server then sends the document to the IE task for information extraction.
3. The IE task applies an IE Discovery Module, as defined for it by the Extraction Server, to the document.
4. Information extraction results are returned from the IE task to the Extraction Server.
5. The results are written to the Event database.
6. The results may then be accessed by a client application – for example, either a graphical application such as any Text Analytics application or a user-supplied application.

A.3 TEXT TOKENIZATION

Before performing pattern matching on a text, the DIAL engine requires that the text be tokenized. This is a process whereby the text is divided into units, most of which correspond to words in the language in which extraction is being performed. This is done before any pattern matching is carried out because pattern matching relies on the existence of tokens in the Shared Memory (SM).

Here is an example of how the standard DIAL tokenizer operates on text:

Untokenized text: "The CEO of U.S. Motors met today with James A. Smith, the founder of Colorado Engines."

Tokenized text (with zero-based token numbers):

the(0) ceo(1) of(2) u(3) .(4) s(5) .(6) motors(7) met(8) today(9) with(10) james(11) a(12) .(13) smith(14) ,(15) the(16) founder(17) of(18) colorado(19) engines(20) .(21)

Note that tokenized text omits capitalization and that punctuation and other nonalphanumeric characters are also tokens. Token numbering is zero based. When instances of concepts are found, they are stored with their offsets and lengths (in tokens) in relation to the entire text body being processed. For example, if our text body were the tokenized sentence above, we would store two instances of companies: "U.S. Motors" at token offset 3, with a length of 5 tokens, and "Colorado Engines" at token offset 19, with a length of 2 tokens.

The standard DIAL English tokenizer, provided with all DIAL products, also determines which scanner properties apply to each token. Scanner properties are attributes of a single token's characters such as Capital (begins with a capital letter), Number (expresses a numeric value), Alpha (contains only alphabet characters), and so on.

A.4 CONCEPT AND RULE STRUCTURE

This section presents a brief outline of the concept and rule structure of the DIAL code. The details of each code section, its use, and various options are omitted to keep the description simple. The DIAL code consists mostly of concept and rule code blocks. In a concept code block, we define the name and attributes of the concept as well as some operations and settings that we wish to be common to all instances of the concept. A concept may have one or more rule code blocks with the same name as the concept. These are the concept's rules. A concept's rules determine what patterns to search for when trying to find an instance of the concept, how to add an instance of the concept, and with which attribute values.

Both concept and rule blocks have sections with different names, each serving a certain purpose. Sections are headed by the section name followed by a colon. The following summarizes the different sections and their associated concept or rule code blocks.

Concept sections (all optional):

■ Attributes – Defines the names and types of concept attributes. These are usually filled with values from the concept instance's matching text.
■ Guards – Similar to rule constraints, concept guards are logical conditions on the concept attributes' values that must be met. If they are not, the instance is discarded.
■ Actions – Code operations to perform after finding a concept instance. Concept actions are performed only if all the concept's guard conditions are true or if the concept has no guards.
■ Internal – A section for defining internal concepts. These are concepts that can be used only within the scope of the concept in which they are defined and any inheriting concepts.
■ Function – A section for defining add-on (Perl) functions that can be used only in the scope of the concept in which they are defined and any inheriting concepts.
■ Context – Defines the text units in which to search for the concept instances. Usually this section will not appear, and then the concept will be searched for within the module's context.
■ Dependencies – Permits definition of an explicit dependency of one concept on another.

Rule sections:

■ Pattern (mandatory) – Defines the text pattern to match when searching for a concept instance.
■ Constraints (optional) – Defines logical conditions to apply to values extracted from the pattern match. If these conditions are not met for a specific match, the actions block of the rule will not be performed on that match.
■ Actions (mandatory) – Code operations to perform after finding a pattern match. Among other things, this is where concept instances are added. Rule actions are performed only if all the rule's constraints are met or if the rule has no constraints.

A.4.1 Context

When the DIAL engine searches for concept instances, it does so within a certain context. The most commonly used context is a sentence, for most concept instances are contained within a single sentence. A "Sentence" concept is defined, and instances of this concept are found and inserted into the SM via a standard DIAL plug-in. If a concept is searched for within the context of a "Sentence," each sentence instance is searched separately for the concept instances. This means that an instance's entire token range must be within a single "Sentence" and cannot overlap with two or more "Sentences."

The section for defining the module context is located in the module file. The module context is the default context for all concepts in that module.
The DIAL code consists mostly of concept and rule code blocks. In a concept code block, we define the name and attributes of the concept as well as some operations and settings that we wish to be common to all instances of the concept. A concept may have one or more rule code blocks with the same name as the concept. These are the concept’s rules. A concept’s rules determine what patterns to search for when trying to find an instance of the concept and how to add an instance of the concept and with which attribute values. Both concept and rule blocks have sections with different names, each serving a certain purpose. Sections are headed by the section name followed by a colon. The following table summarizes the different sections and their associated con-cept or rule code blocks: A.4 Concept and Rule Structure 319 Code Block Section Description Mandatory Concept Attributes Defines the names and types of concept attributes. These are usually filled with values from the concept instance’s matching text. No Concept Guards Similar to rule constraints, concept guards are logical conditions on the concept attributes’ values that must be met. If they are not, the instance is discarded. No Concept Actions Code operations to perform after finding a concept instance. Concept actions are performed only if all the concept’s guard conditions are true or if the concept has no guards. No Concept Internal A section for defining internal concepts. These are concepts that can be used only within the scope of the concept in which they are defined and any inheriting concepts. No Concept Function A section for defining add-on (Perl) functions that can be used only in the scope of the concept in which they are defined and any inheriting concepts. No Concept Context Defines the text units in which to search for the concept instances. Usually this section will not appear, and then the concept will be searched for within the module’s context. No Concept Dependencies Permits definition of an explicit dependency of one concept on another. No Rule Pattern Defines the text pattern to match when searching for a concept instance. Yes Rule Constraints Defines logical conditions to apply to values extracted from the pattern match. If these conditions are not met for a specific match, the actions block of the rule will not be performed on that match. No Rule Action Code operations to perform after finding a pattern match. Among other things, this is where concept instances are added. Rule actions are performed only if all the rule’s constraints are met or if the rule has no constraints. Yes A.4.1 Context When the DIAL engine searches for concept instances, it does so within a certain context. The most commonly used context is a sentence, for most concept instances arecontainedwithinasinglesentence.A“Sentence”conceptisdefined,andinstances of this concept are found and inserted into the SM via a standard DIAL plug-in. If a concept is searched for within the context of a “Sentence,” each sentence instance is searched separately for the concept instances. This means that an instance’s entire 320 DIAL: A Dedicated Information Extraction Language for Text Mining token range must be within a single “Sentence” and cannot overlap with two or more “Sentences.” The section for defining the module context is located in the module file. The module context is the default context for all concepts in that module. 
Any context (both module context and concept context) may be a single concept name or a combination of concept names and logical operators (AND, OR, NOT). Most concepts use the module context. If a concept’s context section is missing or empty, that concept will use the module context. However, it is possible to override the module context for a specific concept and set it to be any concept required. For instance, a user might wish to search not within sentences but within the entire document or within each two or three consecutive sentences. Another example might be to apply certain concepts only to the document’s title (on the assumption that the title has a recognizable format) or to identify tables of numbers within a document and apply certain concepts only to them. Some examples are shown in the following code samples: concept TableColumnTotal{ context: NumberTable; //NumberTable is the name of a defined concept . . . }; concept TitlePerson { context: Title;// Title is the name of a concept . . . }; concept CompanyPersonnel { context: NOT ContactSection; //avoid searching the contact section of a document for //company personnel . . . } concept DocumentSubjects { context: Title OR FirstParagraph; //search only the title and first paragraph of //each document for its subjects . . . }; concept TitleSubjects { context: Title AND DocumentSubjects; //look only at document subjects found //in the title . . . }; A.5 PATTERN MATCHING Patterns are at the core of pattern-based extraction logic. The first operation to be performed on a rule is an attempt to match its pattern. Only if the pattern is matched is the rest of the rule code applied. A.6 Pattern Elements 321 A text pattern is a formal and general definition of what an instance of the con-cept looks like. A pattern is a sequence of pattern elements (which define types of text elements) with or without pattern operators. Pattern operators mostly denote something about the number of times the elements should appear or a relationship between pattern elements. For a text to match a pattern, it must match each of the pattern elements in the order in which they appear in the pattern. For example, the pattern Number “-” Number “-” Number “-” Number would be matched by the text: “1-800-973-5651.” On the one hand, a pattern should be general enough to match many instances of a concept. On the other, if a pattern is too general, it might match text that is not really an instance of the concept. Consider the following examples of patterns for Person: The pattern “Mr. John Smith” will certainly correctly identify all people referred to as “Mr. John Smith” but nothing else. This pattern is too specific. The pattern “Capital Capital” will probably match many names of people but will also match such texts as “Gen-eral Motors,” “Information Extraction,” and “Buenos Aires.” This pattern is too general. The pattern “Mr. Capital Capital” however, is a good example of a general pattern that will catch many of the required instances without mistakenly matching texts that are not names of people. It can be further enhanced as “Mr. Capital Capital? Capital”, where the Capital? element stands for an optional middle name. However, this pat-tern will match only male people whose title happens to be “Mr.” Additional rules would still be needed if this pattern were used. Clearly, pattern writing may require a great deal of fine-tuning and iterative improvement to ensure that all the required instances are found. 
A.6 PATTERN ELEMENTS A pattern element is a type of textual entity that may appear in a pattern. The patterns are the core elements that allow defining a pattern. A variety of options are provided to describe pattern elements, including exact string constants (e.g., “George Bush”); wordclasses (i.e., predefined lists of words or phrases); other concept names, regular expressions, scanner properties (e.g., AllCaps, digits, etc.); and wild cards. A pattern element is a type of textual entity that may appear in a pattern. 322 DIAL: A Dedicated Information Extraction Language for Text Mining The pattern elements are as follows: ■String Constants ■Wordclass Names ■Thesaurus Names ■Concept Names ■Character-Level Regular Expressions ■Character Classes ■Scanner Properties ■Token Elements A.6.1 String Constants String constants are elements consisting of characters surrounded by double quota-tion marks (“ ”). Note that a single string constant may contain several tokens. String constants in patterns are tokenized automatically, using the tokenizer defined in the module, which is also used on the text at runtime. This means that when the engine searches for a string constant, it will ignore letter case and token spacing, as shown in the example below. When a pattern is being matched, string comparison between string constants in the pattern and the text being matched is case insensitive. This means that, when let-ters are compared, upper- or lowercase versions of the same character are considered identical as follows: Example: Pattern Text Matching the Pattern “U.S.A.” “U.S.A.” “u.s.a.” “U.S.A.” A.6.2 Wordclass Names Wordclass names are alphanumeric strings that have previously been declared as wordclasses in one of the DIAL source files – either in the same module in which the wordclass is referred to or in an imported module, which exports the wordclass. A wordclass is a collection of words or phrases that have some common characteristic. By convention, wordclass names start with “wc.” Wordclass contents may be loaded dynamically from a file. The dynamic load option is useful for wordclasses that tend to change frequently – for example, a wordclass that contains names of companies. A wordclass that is not loaded dynam-ically requires recompilation of its module if it is changed, but a dynamically loaded wordclass does not. Wordclass members are tokenized automatically so that the text is matched against their tokenized versions. A.6 Pattern Elements 323 A wordclass member may be one of the following: ■A single all-letter token. ■Any string surrounded by quotes. Example: wordclass wcPersonTitle = “mr.” “mrs.” miss “dr.” president king queen con-gressman “prime minister”; Pattern Text Matching the Pattern WCPersonTitle “Mr.” “mr.” “Miss” A.6.3 Thesaurus Names Thesaurus names are alphanumeric strings that have previously been declared as thesaurus objects in one of the DIAL source files – either in the same module in which the thesaurus is referred to or in an imported module that exports the wordclass. A thesaurus is like a collection of several wordclasses. Within each class, the members are either synonyms of each other or serve some similar semantic purpose, as in wordclasses. Thesaurus members are tokenized automatically, and thus the text is matched against their tokenized versions. The thesaurus head members have a special status. 
Within a thesaurus class, the first member is called the "head" and is used as the identifier of the class; it may also be used to normalize instances of the class to a single display value (canonization). Thesauri may also be extended dynamically – that is, new classes and members of classes may be added to a thesaurus at run time. Thesaurus contents may be loaded dynamically from a file.

Example:

thesaurus thNationalities =
    {American "U.S." "U.S.A." "United States"}
    {British English "U.K." "United Kingdom"};

Pattern: thNationalities
Texts matching the pattern: "American", "u.s.a.", "English"

A.6.4 Concept Names

Concept names are case-sensitive alphanumeric strings that have been declared as concepts. The concepts used in concept patterns must either have already been declared in the current module or be public concepts from one of its imported modules; otherwise, the current module will not compile. The ability of one concept to refer to another concept in its pattern enables the user to create very complex patterns in a concise and modular fashion. It also makes it possible to refer to concepts in modules developed by someone else without having to be aware of their code.

Example: Suppose that you have defined the following concepts:

■ Person – a concept for finding people.
■ Position – a concept for finding professional positions, such as "CEO," "general manager," "team leader," "vice president," and so on.
■ Company – a concept for finding names of companies.

One could then write the following simple pattern for finding a person's position in a company:

Pattern: Person "is employed as" "a"? Position "at" Company
Texts matching the pattern:
"James White is employed as a senior programmer at Software Inc."
"Harvey Banes is employed as vice president at MegaStorage"

A.6.5 Character-Level Regular Expressions

Character-level regular expressions are used relatively rarely in most Discovery Modules. They are used only when it is necessary to have control over the pattern at the character level. Most pattern elements can be matched only on whole units of tokens. Character-level regular expressions and character classes can find matches that contain parts of tokens. Character-level regular expressions have standard Perl-compliant syntax but must be surrounded by 'regexp(" ")' within the pattern. Here are some examples of metacharacters used in character-level regular expressions:

Pattern: regexp(".")
Texts matching the pattern: "a", "?"

Pattern: regexp("Mc[A-Z][a-z]+")
Texts matching the pattern: "McDonald", "McReady"

Pattern: regexp("h2o(co4)?[hoc0-9]")
Texts matching the pattern: "h2oco4", "h2och4o"

A.6.6 Character Classes

It is possible to define character classes, which are sets of character-level regular expressions. Character classes may appear as pattern items.

Example:

charclass ccScotchIrishLastname = {"Mc[A-Z][a-z]+", "Mac[A-Z][a-z]+", "O'[A-Z][a-z]+", . . . };

Pattern: ccScotchIrishLastname
Texts matching the pattern: "McPhee", "O'Leary"

A.6.7 Scanner Properties

Scanner properties are attributes of a single token's characters such as Capital (begins with a capital letter), Number (expresses a numeric value), Alpha (contains only alphabet characters), and so on.

A.6.8 Token Elements

The Token pattern element is used as a placeholder for any token. It is used when defining a pattern in which there may be one or more tokens whose value is unimportant.
Example:

Pattern: "He said: '" Token+ "' and she said"
Texts matching the pattern:
"He said: 'What time is it?' and she said"
"He said: 'Miss Barker, please move my three-o'-clock meeting to 4:30' and she said"

A.7 RULE CONSTRAINTS

Sometimes, when writing a rule pattern, we find that the definition of the pattern is not precise enough on its own and that additional conditions on the pattern, which cannot be expressed by pattern elements and operators, must be added. Constraints are conditions on pattern variables. All the constraints must have a "true" value, or the actions section will not be performed – that is, the match will be disqualified even though the pattern has been found. A constraint may be composed of a single Boolean clause, the negation of such a clause (in the format NOT(. . .) or !(. . .)), or several clauses with "AND" and "OR" operators between them.

The following is a simple example: Suppose we want a rule for a sequence of capital words of any length except 5. The condition "any length except 5" cannot be expressed in a single pattern. It can be expressed in a constraint, however, as follows:

. . .
pattern:
    Capital+;
constraints:
    this-match.TokenCount() != 5;
. . .

A.7.1 Comparison Constraints

The preceding example illustrates a comparison constraint. Comparison operators in DIAL are as follows:

==, !=, <, <=, >, >=

All comparison operators may be applied to numeric values. Only "==" (equal to) and "!=" (not equal to) may be applied to string and phrase values. No comparison operators may be applied to concept and list values. Comparison operators may be used in "if" clauses as well as in constraints.

A.7.2 Boolean Constraints

If a constraint clause is a numeric value (e.g., "var.IsEmpty()" or "varNumber"), it will be considered false if the value is zero and true otherwise.

A.8 CONCEPT GUARDS

Guards may be applied to concept attributes when a rule attempts to add a concept instance to the SM. Only if all guard conditions are met will the instance be added and the actions section of the concept performed. All of the rule constraint syntax applies to concept guards as well. The difference between them is that rule constraints are applied to pattern variables, whereas concept guards are applied to concept attribute values. Guards enable the concept to enforce conditions on its attribute values in a central location without having to add these conditions to each rule of the concept.

Example:

concept Date {
    attributes:
        number nDay;
        number nMonth;
        number nYear;
    guards:
        (nDay >= 1) AND (nDay <= 31);
        (nMonth >= 1) AND (nMonth <= 12);
        (nYear > 0);
};

Actions are blocks of code operations to be performed after a pattern match, in the case of rule actions, or after adding a concept instance, in the case of concept actions. Concept actions are not mandatory and in most cases will not be used at all. Rule actions, however, are always used: If a rule has no actions, it will not add an instance to the SM even if it matches the text properly and therefore will have no effect on the Discovery Module output. The most important action a rule performs is to add a concept instance with its appropriate attribute values. It may also perform other actions. All actions are performed with or on variables. Rule actions may use pattern variables, local variables, and global variables. Concept actions may use attribute variables, local variables, and global variables.
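To summarize how patterns, constraints, actions, and guards interact, the following is a small Python mock of the processing flow described above; a regex stands in for the DIAL pattern, and the date format and the 1900 cutoff are arbitrary choices made for the example:

import re

def add_date_instance(day, month, year):
    # concept guards: discard the instance unless every condition holds
    if not (1 <= day <= 31 and 1 <= month <= 12 and year > 0):
        return None
    return {"nDay": day, "nMonth": month, "nYear": year}

def date_rule(text):
    instances = []
    # pattern: Number "/" Number "/" Number (month/day/year)
    for m in re.finditer(r"\b(\d{1,2})/(\d{1,2})/(\d{4})\b", text):
        month, day, year = (int(g) for g in m.groups())
        # rule constraint: disqualify the match, e.g., years before 1900
        if year < 1900:
            continue
        # rule action: add the instance (subject to the concept guards)
        instance = add_date_instance(day, month, year)
        if instance is not None:
            instances.append(instance)
    return instances

print(date_rule("Filed 13/1/2002; issued 10/1/2003; cited 5/5/1885."))
# [{'nDay': 1, 'nMonth': 10, 'nYear': 2003}]
# 13/1/2002 fails the month guard; 5/5/1885 fails the rule constraint.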
A.9 COMPLETE DIAL EXAMPLES

A.9.1 Extracting People Names Based on Title/Position

In the following code fragment we define two concepts. The first concept is PersonNameStruct, which simply looks for some variation of First Name, Middle Name, and Last Name. This concept is rather naïve because it does not enforce any constraints, and as a result the precision of extracting people names using the rules associated with this concept would be rather poor. The second concept is Person. This concept has the same pattern as the PersonNameStruct concept with the addition of some constraints. The constraints (which are explained in the code) considerably increase the precision of the concept.

wordclass wcPosition = adviser minister spokesman president (vice president) general (gen.);
/* note that wordclass members are tokenized and entries containing
   multiple tokens should be enclosed within () */

concept PersonNameStruct {
    // we define this concept to allow code reuse
    attributes:
        string FirstName;
        string MiddleName;
        string LastName;
};

wordclass wcNamePrefix = ben abu abed von al;
/* common prefixes of family names */

rule PersonNameStruct {
    pattern:
        Capital -> first
        (Capital "."?)? -> middle
        ((wcNamePrefix "-"?)? Capital) -> last;
        /* the pattern looks for 3 elements, where the 2nd element
           (middle name) is optional, and the 3rd element may have
           an optional prefix */
    actions:
        Add(FirstName <- first.Text(), MiddleName <- middle.Text(),
            LastName <- last.Text());
        /* add an instance of PersonNameStruct to the Shared Memory */
};

rule Person {
    pattern:
        Capital -> first
        (MiddleName -> middle)?
        ((wcNamePrefix "-"?)? Capital) -> last;
    constraints:
        (first IS-IN wcFirstNames) OR !(middle.IsEmpty()) OR
            (first {1} AFTER wcPosition);
        !(first.FirstToken() IS-IN wcPosition);
        /* The constraints filter out erroneous instances: either first
           needs to be in the lexicon of first names, or middle is not
           empty (since it is optional), or the token preceding first
           was a position. In addition, we make sure that the first
           token of first is not part of a known position. */
    actions:
        Add(FirstName <- first.Text(), MiddleName <- middle.Text(),
            LastName <- last.Text());
        /* add an instance of Person to the Shared Memory */
};

If there are several rules for a concept, then the order of the rules is important. The first rule will be applied before the second rule, and if it succeeds, it will block the pattern's range such that no other rules can be applied to this text fragment.

A.9.2 Extracting Lists of People Names Based on a Preceding Verb

We want to extract a list of people and feed them into a list variable called pList. The pattern looks for one of a predefined set of verbs followed by "are" or "were" and then a list of people names separated by commas. The code extracts the list of people and then iterates over the list to create a new instance of Person in the shared memory for each member of the list.

concept PersonsList {};

wordclass wcCriminalIndicatingVerbs = charged blamed arrested;
wordclass wcPluralBe = are were;

rule PersonsList {
    pattern:
        wcCriminalIndicatingVerbs wcPluralBe
        (PersonNameStruct ->> pList ","?)+
        "and" PersonNameStruct ->> pList;
        /* we are looking for a criminal verb followed by "are" or
           "were", and then a list of people (separated by commas);
           pList will hold the list of people by using the ->>
           (append to list) operator */
    actions:
        iterate (pList)
        begin
            currPerson = pList.CurrentItem();
            Add(Person, currPerson,
                FirstName <- currPerson.FirstName,
                LastName <- currPerson.LastName);
        end
};

A.9.3 Using a Thesaurus to Extract Location Names

In this example we look for country names in the text. If the country appears in the thesaurus it will be replaced by the canonic entry; otherwise, it will remain unchanged. The constraint makes sure that the entry contains a capital letter.

thesaurus thsCountries;

rule Location {
    pattern:
        wcCountries -> the-country;
    constraints:
        the-country CONTAINS Capital;
    actions:
        canonic-country = thsCountries.GetHead(the-country);
        if (canonic-country.IsEmpty())
            Add(Location, this-match, "country", the-country.Text());
        else
            Add(Location, this-match, "country", canonic-country);
};

A.9.4 Creating a Thesaurus of Local People Names

In this example we augment the definition of the Person concept with the ability to add the name of the person to a local thesaurus. The thesaurus will contain all names of people in the document and then can be used for anaphora resolution (as in cases in which just the last name appears in the text after the full name of that person was mentioned earlier). In the action part of the rule we check whether the person was already added to the thesaurus. If the person's name was not yet added, a new entry is created with the full name as the leader (the canonic form) and three additional variations (first name and last name, last name alone, and first name alone).

thesaurus thLocalPersons;

concept Person {
    attributes:
        string FullName;
        string FirstName;
        string MiddleName;
        string LastName;
    actions:
        if (thLocalPersons.GetHead(FullName).IsEmpty())
        begin
            thLocalPersons.AddSynonym(FullName.Text(), FullName);
            thLocalPersons.AddSynonym(FullName.Text(), FirstName + LastName);
            thLocalPersons.AddSynonym(FullName.Text(), LastName);
            thLocalPersons.AddSynonym(FullName.Text(), FirstName);
        end
        FullName = FullName.Text();
};

A.9.5 A Simplified Anaphora Resolution Rule for Resolving a Person's Pronoun

In this example we illustrate a sample anaphora resolution rule. The rule will be activated if we encounter a pronoun (Pron) whose type is person. We then look whether there is a person name in the previous sentence. If there is a person in the previous sentence, we resolve the pronoun to point to that person. Note that this is just one simple rule in the overall anaphora resolution solution.

concept PersonAnaphora {
    attributes:
        string FullName;
};

rule PersonAnaphora {
    pattern:
        Pron -> p;
    constraints:
        p.type == "person";
    actions:
        prevSentence = this-match.Previous(Sentence);
        prevPerson = prevSentence.Next(Person);
        if (!prevPerson.IsEmpty())
            Add(prevPerson.FullName);
};

A.9.6 Anaphoric Family Relation

In the following example we show a DIAL rule for extracting a simple pattern of anaphoric FamilyRelation as in the following extract from a Pakistani newspaper (October 2002):

PML(Q)'s provincial leader Amanullah Khan is contesting election from NA-17 Abbottabad-1 while his nephew Inayatullah Khan Jadoon is contesting from PF-45 under PML(N) ticket

public concept FamilyRelation {
    attributes:
        string Person;
        string FamilyRelation;
        string Person-Relative;
};

wordclass wcThirdPersonPossessPronoun = "his" "her";
wordclass wcFamilyRelation = "father" "mother" "son" . . .
”nephew”; rule FamilyRelation { pattern: PersonAnaphora->pron //his wcFamilyRelation->relation //nephew wcComma? // optional comma PersonOrPersonDetails -> relative; 332 DIAL: A Dedicated Information Extraction Language for Text Mining //Inayatullah Khan //Jadoon constraints: //pron is a relevant pronoun pron IS−A wcThirdPersonPossessPronoun; //make sure that the antecedent (here: ”Amanullah Khan”) isn’t empty ! pron.Antecedent.IsEmpty(); //person is never a relative of himself ! pron.Antecedent! = relative.Person; actions: Add(Person<-pron.Antecedent,FamilyRelation<-Relation, Person−Relative<-relative.Person); }; The meaning of the rule above is as follows: ■Extract a FamilyRelation instance if the following sequence was matched: A pronoun resolved as PersonAnaphora (in earlier module) followed by a family relation noun, followed by an optional comma, and then an instance of Person-OrPersonDetails (a Person name or a noun phrase or an appositive clause with person name as head). ■Subject to the constraints: 1. The pronoun is a third person possessive pronoun (“his” or “her”). 2. A resolved person (for the pronoun) was found (i.e., it is not empty). 3. Theresolvedperson(towhichthepronounrelatesisnotequaltotheperson identified within the pattern (“relative”). ■Add one instance of FamilyRelation. The first attribute (Person) should be the pronoun refers to (pron.Antecedent). The second attribute should be the relation word (in the example above: “nephew”). The third attribute, Person Relative, should be the person found within the pattern itself (relative – here Inayatullah Khan Jadoon). A.9.7 Meeting between People We will demonstrate the implementation of a simple rule for the PersonMeeting concept, which stands for a meeting between two people. Consider the following extract from a publication of IRNA (official Iranian News Agency), October 2002: During his three-day stay in Iran, the Qatari official is to meet Interior min-ister Mousavi Lari, Majlis Speaker Mehdi Karroubi and First Vice-President Mohammad Reza Aref. We first present the concept’s prototype: A.9 Complete DIAL Examples 333 public concept PersonMeeting { attributes: string Person1; string Person2; string PersonDescription1; string PersonDescription2; string Organization; string MeetingStatus; string Date; }; Person1, Person2, PersonDescription1, PersonDescription2 and Organization are the participants in the meeting. We extract a different instance for every pair of people that met. This means that in the preceding example we extract one instance for the meeting of “the Qatari official” with “Mousavi Lari,” one instance for his meeting with “Mehdi Karroubi,” and one instance for his meeting with “Mohammad Reza Aref.” If the first party of the meeting is a name of a person (such as “Mousavi Lari”), then the Person1 attribute is filled. The same is true for Person2 (regarding the second party). Similarly, in cases in which we do not have a specific name but rather a descrip-tion such as “the Qatari official,” then PersonDescription1 (or PersonDescription2) is filled. The Organization attribute is filled in the case of a meeting (or a speech given) between a person and an organization. MeetingStatus may be “announced” (actual meeting has taken place) or “plan-ned” (future). Date is the meeting date (if available). 
A.9.7 Meeting between People

We will demonstrate the implementation of a simple rule for the PersonMeeting concept, which stands for a meeting between two people. Consider the following extract from a publication of IRNA (the official Iranian news agency), October 2002:

    During his three-day stay in Iran, the Qatari official is to meet Interior minister Mousavi Lari, Majlis Speaker Mehdi Karroubi and First Vice-President Mohammad Reza Aref.

We first present the concept's prototype:

public concept PersonMeeting {
    attributes:
        string Person1;
        string Person2;
        string PersonDescription1;
        string PersonDescription2;
        string Organization;
        string MeetingStatus;
        string Date;
};

Person1, Person2, PersonDescription1, PersonDescription2, and Organization are the participants in the meeting. We extract a different instance for every pair of people that met. This means that in the preceding example we extract one instance for the meeting of "the Qatari official" with "Mousavi Lari," one instance for his meeting with "Mehdi Karroubi," and one instance for his meeting with "Mohammad Reza Aref."

If the first party of the meeting is the name of a person (such as "Mousavi Lari"), then the Person1 attribute is filled. The same is true for Person2 (regarding the second party). Similarly, in cases in which we do not have a specific name but rather a description such as "the Qatari official," PersonDescription1 (or PersonDescription2) is filled. The Organization attribute is filled in the case of a meeting (or a speech given) between a person and an organization.

MeetingStatus may be "announced" (an actual meeting has taken place) or "planned" (future). Date is the meeting date (if available).

The following rule implements the concept:

wordclass wcCommaAnd = "and" "," ",and";

rule PersonMeeting {
    pattern:
        LONGEST(
            PersonDescription -> the-nominal        // "the Qatari official"
            ExtendedVGForMeeting -> meeting-phrase  // verb group: "is to meet"
            (PersonOrPersonDetails ->> meeting-list wcCommaAnd){0,3}
                // "Interior minister Mousavi Lari, Majlis Speaker Mehdi Karroubi"
            PersonOrPersonDetails ->> meeting-list
                // last item: "First Vice-President Mohammad Reza Aref"
        );
    actions:
        iterate (meeting-list)  // iterate over the items of the list meeting-list
        begin
            // create a separate instance for each item
            Add(PersonMeeting, meeting-list.CurrentItem(),
                PersonDescription1 <- the-nominal.PersonDescription,
                Person1 <- meeting-list.CurrentItem().Person.Text(),
                // the status of the meeting: "announced" or (as here) "planned",
                // determined from the tense of the verb used
                MeetingStatus <- meeting-phrase.Status);
        end
};

This rule demonstrates the usage of lists in DIAL. The ->> operator concatenates all the items into one list (here, first up to three elements at the beginning of the list and then the last item: "First Vice-President Mohammad Reza Aref"). Note that only the item itself is concatenated to the list, not the delimiters between the items (wcCommaAnd).

The actions part of the rule demonstrates again the DIAL operator iterate. This operator allows going through each item of the list and performing, for each item, the required actions – in this case adding a separate instance of the PersonMeeting concept for each of the persons. Note that the person from the list is inserted into Person1, whereas PersonDescription1 is fixed for all items – the PersonDescription from the beginning of the pattern (here: "the Qatari official").
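To make the list semantics concrete, here is a small Python analogue (illustrative, not DIAL) of what the iterate loop produces. The bound values for the-nominal, meeting-list, and meeting-phrase.Status are hard-coded stand-ins for what the pattern would actually match.

# Illustrative Python analogue of the ->> list accumulation and iterate loop
# (not DIAL): one PersonMeeting record per person in the matched list.
from dataclasses import dataclass

@dataclass
class PersonMeeting:
    person1: str
    person_description1: str
    meeting_status: str

# Stand-ins for the pattern bindings the-nominal, meeting-list, and
# meeting-phrase.Status.
the_nominal = "the Qatari official"
meeting_list = ["Mousavi Lari", "Mehdi Karroubi", "Mohammad Reza Aref"]
status = "planned"  # derived from the tense of "is to meet"

# The iterate loop: a separate instance per list item, with the description
# and status fixed across all items.
instances = [PersonMeeting(person, the_nominal, status) for person in meeting_list]
for inst in instances:
    print(inst)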
AIR/X – A Rule-Based Multistage Indexing System for Large Subject Fields. In Proceedings of RIAO-91, 3rd International Conference “Recherche d’Information Assist´ ee par Ordinateur.” A. Lichnerowicz, ed. Barcelona, Elsevier Science Publishers, Amsterdam: 606–623. Fuhr, N., and Knorz, G. (1984). Retrieval Test Evaluation of a Rule-Based Automated Indexing (AIR/PHYS). In Proceedings of SIGIR-84, 7th ACM International Conference on Research and Development in Information Retrieval. C. J. v. Rijsbergen, ed. Cambridge, UK, Cam-bridge University Press, Cambridge: 391–408. Fuhr, N., and Pfeifer, U. (1991). Combining Model-Oriented and Description-Oriented Approaches for Probabilistic Indexing. In Proceedings of SIGIR-91, 14th ACM Interna-tional Conference on Research and Development in Information Retrieval. Chicago, ACM Press, New York: 46–56. Fuhr, N., and Pfeifer, U. (1994). “Probabilistic Information Retrieval as Combination of Abstraction Inductive Learning and Probabilistic Assumptions.” ACM Transactions on Information Systems 12(1): 92–115. Fukuda, K., Tamura, A., Tsunoda, T., and Takagi, T. (1998). Toward Information Extraction: IdentifyingProteinNames.InProceedingsofthePacificSymposiumonBiocumputing.Maui, Hawaii, World Scientific Publishing Company, Hackensack, NJ: 707–718. Fung, G. P. C., Yu, J. X., and Lu, H. (2002). Discriminative Category Matching: Efficient Text Classification for Huge Document Collections. In Proceedings of ICDM-02, 2nd IEEE Inter-national Conference on Data Mining. Maebashi City, Japan, IEEE Computer Society Press, Los Alamitos, 187–194. Furnas, G. (1981). “The FISHEYE View: A New Look at Structured Files.” Bell Laborato-ries Technical Report, reproduced in Reading in Information Visualization: Using Vision to Think. S. K. Card, J. D. Mackinlay, and B. Schneiderman, eds. San Francisco, Morgan Kaufmann Publishers: 312–330. Furnas, G. (1986). Generalized Fisheye Views. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems. ACM Press, New York: 16–23. Furnkranz, J. (1999). Exploiting Structural Information for Text Classification on the WWW. In Proceedings of IDA-99, 3rd Symposium on Intelligent Data Analysis. Amsterdam, Springer-Verlag, Heidelberg: 487–497. Furnkranz, J. (2002). “Hyperlink Ensembles: A Case Study in Hypertext Classification.” Infor-mation Fusion 3(4): 299–312. Gaizauskas, R., and Humphreys, K. (1997). “Using a Semantic Network for Information Extraction.” Natural Language Engineering 3(2): 147–196. Galavotti, L., Sebastiani, F., and Simi, M. (2000). Experiments on the Use of Feature Selection and Negative Evidence in Automated Text Categorization. In Proceedings of ECDL-00, 4th European Conference on Research and Advanced Technology for Digital Libraries. Lisbon, Springer-Verlag, Heidelberg: 59–68. Gale, W. A., Church, K. W., and Yarowsky, D. (1993). “A Method for Disambiguating Word Senses in a Large Corpus.” Computers and the Humanities 26(5): 415–439. 352 Bibliography Gall, H., Jazayeri, M., and Riva, C. (1999). Visualizing Software Release Histories: The Use of Color and Third Dimension. In Proceedings of the International Conference on Software Maintenance (ICSM ’99). Oxford, UK, IEEE Computer Society Press, Los Alamitos, CA: 99. Gansner, E., Koutsofias, E., North, S., and Vo, K. (1993). “A Technique for Drawing Directed Graphs.” IEEE Transactions on Software Engineering 19(3): 214–230. Gansner, E., North, S., and Vo, K. (1988). 
“DAG – A Program that Draws Directed Graphs.” Software Practice and Experience 18(11): 1047–1062. Ganter, B., and Wille, R. (1999). Formal Concept Analysis: Mathematical Foundations. Berlin, Springer-Verlag. Gao, S., Wu, W., Lee, C.-H., and Chua, T.-S. (2003). A Maximal Figure-of-Merit Learning Approach to Text Categorization. In Proceedings of SIGIR-03, 26th ACM International Conference on Research and Development in Information Retrieval. Toronto, ACM Press, New York: 174–181. Gaussier, E., Goutte, C., Popat, K., and Chen, F. (2002). A Hierarchical Model for Clustering and Categorising Documents. In Proceedings of ECIR-02, 24th European Colloquium on Information Retrieval Research. Glasgow, Springer-Verlag, Heidelberg: 229–247. Gelbukh, A., ed. (2002). Computational Linguistics and Intelligent Text Processing. In Proceed-ings of 3rd International Conference, CICLing 2001. Mexico City, Springer-Verlag, Berlin and New York. The Gene Ontology (GO) Consortium. (2000). “Gene Ontology: Tool for the Unification of Biology.” Nature Genetics 25: 25–29. The Gene Ontology (GO) Consortium. (2001). “Creating the Gene Ontology Resource: Design and Implementation.” Genome Research 11: 1425–1433. Gentili, G. L., Marinilli, M., Micarelli, A., and Sciarrone, F. (2001). “Text Categorization in an Intelligent Agent for Filtering Information on the Web.” International Journal of Pattern Recognition and Artificial Intelligence 15(3): 527–549. Geutner, P., Bodenhausen, U., and Waibel, A. (1993). Flexibility through Incremental Learning: Neural Networks for Text Categorization. In Proceedings of WCNN-93, World Congress on Neural Networks. Portland, OR, Lawrence Erlbaum Associates, Hillsdale, NJ: 24–27. Ghani, R. (2000). Using Error-Correcting Codes for Text Classification. In Proceedings of ICML-00, 17th International Conference on Machine Learning. P . Langley, ed. Stanford, CA, Morgan Kaufmann Publishers, San Francisco: 303–310. Ghani, R. (2001). Combining Labeled and Unlabeled Data for Text Classification with a Large Number of Categories. In Proceedings of the IEEE International Conference on Data Min-ing. San Jose, CA, IEEE Computer Society Press, Los Alamitos, CA: 597–598. Ghani, R. (2002). Combining Labeled and Unlabeled Data for MultiClass Text Categorization. In Proceedings of ICML-02, 19th International Conference on Machine Learning. Sydney, Australia, Morgan Kaufmann Publishers, San Francisco: 187–194. Ghani, R., Slattery, S., and Yang, Y. (2001). Hypertext Categorization Using Hyperlink Patterns and Meta Data. In Proceedings of ICML-01, 18th International Conference on Machine Learning. Williams College, Morgan Kaufmann Publishers, San Francisco: 178–185. Giorgetti, D., and Sebastiani, F. (2003a). “Automating Survey Coding by Multiclass Text Cat-egorization Techniques.” Journal of the American Society for Information Science and Tech-nology 54(12): 1269–1277. Giorgetti, D., and Sebastiani, F. (2003b). Multiclass Text Categorization for Automated Survey Coding. In Proceedings of SAC-03, 18th ACM Symposium on Applied Computing. Melbourne, Australia, ACM Press, New York: 798–802. Giorgio, M. D. N., and Micarelli, A. (2003). Does a New Simple Gaussian Weighting Approach Perform Well in Text Categorization? In Proceedings of IJCAI-03, 18th International Joint Conference on Artificial Intelligence. Acapulco, Morgan Kaufmann Publishers, San Fran-cisco: 581–586. Glover, E. J., Tsioutsiouliklis, K., Lawrence, S., Pennock, D. M., and Flake, G. W. (2002). 
Using Web Structure for Classifying and Describing Web Pages. In Proceedings of WWW-02, Bibliography 353 International Conference on the World Wide Web. Honolulu, ACM Press, New York: 562– 569. Goldberg, J. L. (1995). CDM: An Approach to Learning in Text Categorization. In Proceedings of ICTAI-95, 7th International Conference on Tools with Artificial Intelligence. Herndon, VA, IEEE Computer Society Press, Los Alamitos, CA: 258–265. Goldberg,J.L.(1996).“CDM:AnApproachtoLearninginTextCategorization.”International Journal on Artificial Intelligence Tools 5(1/2): 229–253. Goldstein,J.,andRoth,S.(1994).UsingAggregationandDynamicQueriesforExploringLarge Data Sets. In Proceedings of Human Factors in Computing Systems CHI ’94 Conference. Boston, ACM, New York: 23–29. Goldszmidt, M., and Sahami, M. (1998). Probabilistic Approach to Full-Text Document Clus-tering. Technical Report ITAD-433-MS-98-044, SRI International. Gomez-Hidalgo, J. M. (2002). Evaluating Cost-Sensitive Unsolicited Bulk Email Categoriza-tion. In Proceedings of SAC-02, 17th ACM Symposium on Applied Computing. Madrid, ACM Press, New York: 615–620. Gomez-Hidalgo, J. M., Rodriguez, J. M. D. B., Lopez, L. A. U., Valdivia, M. T. M., and Vega, M. G. (2002). Integrating Lexical Knowledge in Learning-Based Text Categorization. In Proceedings of JADT-02, 6th International Conference on the Statistical Analysis of Textual Data. St.-Malo, France. Goodman, M. (1990). Prism: A Case-Based Telex Classifier. In Proceedings of IAAI-90, 2nd Conference on Innovative Applications of Artificial Intelligence. Boston, AAAI Press, Menlo Park, CA: 25–37. Gotlieb, C. C., and Kumar, S. (1968). “Semantic Clustering of Index Terms.” Journal of the ACM 15(4): 493–513. Govert, N., Lalmas, M., and Fuhr, N. (1999). A Probabilistic Description-Oriented Approach for Categorising Web Documents. In Proceedings of CIKM-99, 8th ACM International Con-ference on Information and Knowledge Management. Kansas City, MO, ACM Press, New York: 475–482. Graham, M. (2001). Visualising Multiple Overlapping Classification Hierarchies. Ph.D. diss., Napier University. Gray, W. A., and Harley, A. J. (1971). “Computer-Assisted Indexing.” Information Storage and Retrieval 7(4): 167–174. Greene, B. B., and Rubin, G. M. (1971). Automatic Grammatical Tagging of English. Technical Report. Providence, RI, Brown University. Grieser, G., Jantke, K. P., Lange, S., and Thomas, B. (2000). A Unifying Approach to HTML Wrapper Representation and Learning. Discovery Science. In Proceedings of 3rd Interna-tional Conference, DS 2000. Kyoto, Japan, Springer-Verlag, Berlin: 50–64. Grinstein, G. (1996). Harnessing the Human in Knowledge Discovery. In Proceedings of the 2nd ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Portland, OR, AAAI Press, CA: 384–385. Grishman, R. (1996). “The Role of Syntax in Information Extraction.” In Advances in Text Processing: Tipster Program Phase II. San Francisco, Morgan Kaufmann Publishers. Grishman, R. (1997). “Information Extraction: Techniques and Challenges.” In Materials of Information Extraction International Summar School – SCIE ’97. Springer, Berlin: 10–27. Gruber, T. R. (1993). “A Translation Approach to Portable Ontologies.” Knowledge Acquisi-tion 5: 199–220. Guthrie, L., Guthrie, J. A., and Leistensnider, J. (1999). “Document Classification and Rout-ing.” In Natural Language Information Retrieval. T. Strzalkowski, ed. Dordrecht, Kluwer Academic Publishers: 289–310. Guthrie, L., Walker, E., and Guthrie, J. A. (1994). 
Document Classification by Machine: Theory and Practice. In Proceedings of COLING-94, 15th International Conference on Computa-tional Linguistics. Kyoto, Japan, Morgan Kaufmann Publishers, San Francisco: 1059–1063. Hadany, R., and Harel, D. (2001). “A Multi-Scale Method for Drawing Graphs Nicely.” Dis-crete Applied Mathematics 113: 3–21. 354 Bibliography Hadjarian, A., Bala, J., and Pachowicz, P. (2001). Text Categorization through Multistrategy Learning and Visualization. In Proceedings of CICLING-01, 2nd International Conference on Computational Linguistics and Intelligent Text Processing. A. Gelbukh, ed. Mexico City, Springer-Verlag, Heidelberg: 423–436. Hahn,U.,andSchnattinger,K.(1997).KnowledgeMiningfromTextualSources.InProceedings of the 6th International Conference on Information and Knowledge Management. Las Vegas, ACM, New York: 83–90. Hamill, K. A., and Zamora, A. (1978). An Automatic Document Classification System Using Pattern Recognition Techniques. In Proceedings of ASIS-78, 41st Annual Meeting of the American Society for Information Science. E. H. Brenner, ed. New York, American Society for Information Science, Washington, DC: 152–155. Hamill, K. A., and Zamora, A. (1980). “The Use of Titles for Automatic Document Classifi-cation.” Journal of the American Society for Information Science 33(6): 396–402. Han, E.-H., Karypis, G., and Kumar, V. (2001). Text Categorization Using Weight-Adjusted k-Nearest Neighbor Classification. In Proceedings of PAKDD-01, 5th Pacific-Asia Confer-enece on Knowledge Discovery and Data Mining. Hong Kong, Springer-Verlag, Heidelberg: 53–65. Han, J., and Fu, Y. (1995). Discovery of Multiple-Level Association Rules from Large Databases. In Proceedings of the 1995 International Conference on Very Large Data Bases (VLDB’95). Zurich, Morgan Kaufmann Publishers, San Francisco: 420–431. Hanauer, D. (1996). “Integration of Phonetic and Graphic Features in Poetic Text Categoriza-tion Judgements.” Poetics 23(5): 363–380. Hao, M., Dayal, U., Hsu, M., Sprenger, T., and Gross, M. (2001). Visualization of Directed Associations in E-commerce Transaction Data. In Proceedings of Data Visualization (EG and IEEE’s VisSym ’01). Ascona, Switzerland, Springer, Berlin: 185–192. Haralick, R. M. (1994). Document Image Understanding: Geometric and Logical Layout. In Proceedings of CVPR94, IEEE Computer Society Conference on Computer Vision and Pattern Recognition. Seattle, IEEE Computer Society Press, Los Alamitos, CA: 385–390. Hardt, D., and Romero, M. (2002). Ellipsis and the Structure of Discourse. In Proceedings of Sinn und Bedeutung VI, Osnabr¨ uck, Germany, Institute for Cognitive Science, University of Osnabr¨ uck: 85–98. Harel, D., and Koren, Y. (2000). A Fast Multi-Scale Method for Drawing Large Graphs. In Proceedings of the 8th International Symposium on Graph Drawing. Willamsburg, VA, Springer-Verlag, Heidelberg: 282–285. Hatzivassiloglou, V., Duboue, P. A., and Rzhetsky, A. (2001). “Disambiguating Proteins, Genes, and RNA in Text: A Machine Learning Approach.” Bioinformatics 17(Suppl 1): S97–106. Havre, S., Hetzler, B., and Nowell, L. (1999). ThemeRiver(TM): In Search of Trends, Pat-terns and Relationships. In Proceedings of IEEE Symposium on Information Visualization (InfoVis 1999). San Francisco, IEEE Press, New York: 115–123. Hayes, P. (1992). “Intelligent High-Volume Processing Using Shallow, Domain-Specific Tech-niques.” In Text-Based Intelligent Systems: Current Research and Practice in Information Extraction and Retrieval. P. S. Jacobs, ed. 
Hillsdale, NJ, Lawrence Earlbaum: 227–242. Hayes, P. J., Andersen, P. M., Nirenburg, I. B., and Schmandt, L. M. (1990). Tcs: A Shell for Content-Based Text Categorization. In Proceedings of CAIA-90, 6th IEEE Conference on Artificial Intelligence Applications. Santa Barbara, CA, IEEE Computer Society Press, Los Alamitos, CA: 320–326. Hayes, P . J., Knecht, L. E., and Cellio, M. J. (1988). A News Story Categorization System. In Proceedings of ANLP-88, 2nd Conference on Applied Natural Language Processing. Austin, JX, Association for Computational Linguistics, Morristown, NJ: 9–17. Hayes, P. J., and Weinstein, S. P. (1990). Construe/Tis: A System for Content-Based Indexing of a Database of News Stories. In Proceedings of IAAI-90, 2nd Conference on Innovative Applications of Artificial Intelligence. Boston, AAAI Press, Menlo Park, CA: 49–66. Bibliography 355 He, J., Tan, A.-H., and Tan, C.-L. (2003). “On Machine Learning Methods for Chinese Docu-ment Categorization.” Applied Intelligence 18(3): 311–322. Heaps, H. S. (1973). “A Theory of Relevance for Automatic Document Classification.” Infor-mation and Control 22(3): 268–278. Hearst, M. (1992). Automatic Acquisition of Hyponyms From Large Text Corpora. In Proceed-ings of the 14th International Conference on Computational Linguistics. Nantes, France, Association for Computational Linguistics, Morristown, NJ: 539–545. Hearst, M. (1995). TileBars: Visualization of Term Distribution Information in Full-Text Infor-mation Access. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems, Denver, CO, ACM, New York: 59–66. Hearst, M. (1999a). Untangling Text Mining. In Proceedings of the 37th Annual Meeting of the Association of Computational Linguistics. College Park, MD, Association of Computational Linguistics, Morristown, NJ: 3–10. Hearst, M. (1999b). “User Interfaces and Visualization.” In Modern Information Retrieval. R. Baeza-Yates and B. Ribeira-Neto, eds. Boston, Addison-Wesley Longman Publishing Company: 257–323. Hearst, M. (2003). Information Visualization: Principles, Promise and Pragmatics Tutorial Notes. In Proceedings of CHI 03. Fort Lauderdale, FL. Hearst, M., and Hirsh, H. (1996). Machine Learning in Information Access. Papers from the 1996 AAAI Spring Symposium. Stanford, CA, AAAI Press, Menlo Park, CA. Hearst, M., and Karadi, C. (1997). Cat-a-Cone: An Interactive Interface for Specifying Searches and Viewing Retrieval Results Using a Large Category Hierarchy. In Proceedings of the 20th Annual International ACM/SIGIR Conference. Philadelphia, ACM Press, New York: 246– 255. Hearst, M. A. (1991). Noun Homograph Disambiguation Using Local Context in Large Cor-pora. In Proceedings of the 7th Annual Conference of the University of Waterloo Centre for the New Oxford English Dictionary. Oxford, UK: 1–22. Hearst, M. A., Karger, D. R., and Pedersen, J. O. (1995). Scatter/Gather as a Tool for the Navigation of Retrieval Results. Working Notes, AAAI Fall Symposium on AI Applications in Knowledge Navigation. Cambridge, MA, AAAI Press, Menlo Park, CA: 65–71. Hearst, M. A., and Pedersen, J. O. (1996). Reexamining the Cluster Hypothesis: Scatter/Gather on Retrieval Results. In Proceedings of ACM SIGIR ’96. Zurich, ACM Press, New York: 76–84. Hersh, W., Buckley, C., Leone, T. J., and Hickman, D. (1994). OHSUMED: An Interactive Retrieval Evaluation and New Large Text Collection for Research. In Proceedings of SIGIR-94, 17th ACM International Conference on Research and Development in Information Retrieval. 
Dublin, Springer-Verlag, Heidelberg: 192–201. Hetzler, B., Harris, W. M., Havre, S., and Whitney, P. (1998). Visualizing the Full Spectrum of Document Relationships. In Proceedings of the 5th International Society for Knowledge Organization (ISKO) Conference. Lille, France, Ergon-Verlog, W¨ urzburg, Germany: 168– 175. Hetzler, B., Whitney, P ., Martucci, L., and Thomas, J. (1998). Multi-Faceted Insight through Interoperable Visual Information Analysis Paradigms. In Proceedings of Information Visu-alization ’98. Research Triangle Park, NC, IEEE Computer Society Press, Los Alamitos, CA: 137–144. Hill, D. P., Blake, J. A., Richardson, J. E., and Ringwald, M. (2002). “Extension and Integration of the Gene Ontology (GO): Combining GO Vocabularies with External Vocabularies.” Genome Research 12: 1982–1991. Hindle, D. (1989). Acquiring Disambiguation Rules from Text. In Proceedings of 27th Annual Meeting of the Association for Computational Linguistics. Vancouver, Association for Com-putational Linguistics, Morristown, NJ: 118–125. Hirschman, L., Park, J. C., Tsujii, J., Wong, L., and Wu, C. H. (2002). “Accomplishments and Challenges in Literature Data Mining for Biology.” Bioinformatics Review 18(12): 1553– 1551. 356 Bibliography Hoashi, K., Matsumoto, K., Inoue, N., and Hashimoto, K. (2000). Document Filtering Methods Using Non-Relevant Information Profile. In Proceedings of SIGIR-00, 23rd ACM Interna-tional Conference on Research and Development in Information Retrieval. Athens, ACM Press, New York: 176–183. Hobbs, J. (1986). “Resolving Pronoun References.” In Readings in Natural Language Pro-cessing. B. J. Grosz, K. S. Jones and B. L. Webber, eds. Los Altos, CA, Morgan Kaufmann Publishers: 339–352. Hobbs, J., Douglas, R., Appelt, E., Bear, J., Israel, D., Kameyama, M., Stickel, M., and Tyson, M. (1996). “FASTUS: A Cascaded Finite-State Transducer for Extracting Information from Natural-Language Text.” In Finite State Devices for Natural Language Processing. E. Roche, and Y. Schabes, eds. Cambridge, MA, MIT Press: 383–406. Hobbs, J. R. (1993). FASTUS: A System for Extracting Information from Text. In Proceedings of DARPA Workshop on Human Language Technology. Princeton, NJ, Morgan Kaufmann Publishers, San Mateo, CA: 133–137. Hobbs, J. R., Appelt, D. E., Bear, J., Tyson, M., and Magerman, D. (1991). The TACITUS System: The MUC-3 Experience. Menlo Park, CA, SRI. Hoch, R. (1994). Using IR Techniques for Text Classification in Document Analysis. In Pro-ceedings of SIGIR-94, 17th ACM International Conference on Research and Development in Information Retrieval. Dublin, Springer-Verlag, Heidelberg: 31–40. Honkela, T. (1997). Self-Organizing Maps in Natural Language Processing. Neural Networks Research Centre. Helsinki, Helsinki University of Technology. Honkela, T., Kaski, S., Kohonen, T., and Lagus, K. (1998). “Self-Organizing Maps of Very Large Document Collections: Justification for the WEBSOM Method.” In Classification, Data Analysis and Data Highways. I. Balderjahn, R. Mathar and M. Schader, eds. Berlin, Springer-Verlag: 245–252. Honkela, T., Kaski, S., Lagus, K., and Kohonen, T. (1997). WEBSOM – Self-Organizing Maps ofDocumentCollections.InProceedingsofWSOM’97,WorkshoponSelf-OrganizingMaps. Espoo, Finland, Helsinki University of Technology. Helsinki: 310–315. Honkela, T., Lagus, K., and Kaski, S. (1998). “Self-Organizing Maps of Large Document Collections.” In Visual Explorations in Finance with Self-Organizing Maps. G. Deboeck and T. Kohonen, eds. London, Springer: 168–178. 
Hornbaek, K., Bederson, B., and Plaisant, C. (2002). “Navigation Patterns and Usability of Zoomable User Interfaces With and Without an Overview.” ACM Transactions on Computer–Human Interaction 9(4): 362–389. Hotho, A., Maedche, A., Staab, S., and Zacharias, V. (2002). “On Knowledgeable Super-vised Text Mining.” In Text Mining: Theoretical Aspects and Applications. J. Franke, G. Nakhaeizadeh, and I. Renz, eds. Heidelberg, Physica-Verlag (Springer): 131– 152. Hotho, A., Staab, S., and Maedche, A. (2001). Ontology-Based Text Clustering. In Proceedings of the IJCAI-2001 Workshop Text Learning: Beyond Supervision. Seattle. Hotho, A., Staab, S., and Stumme, G. (2003). Text Clustering Based on Background Knowledge. Institute of Applied Informatics and Formal Descriptive Methods, University of Karlsruhe, Germany: 1–35. Hoyle, W. G. (1973). “Automatic Indexing and Generation of Classification by Algorithm.” Information Storage and Retrieval 9(4): 233–242. Hsu, W.-L., and Lang, S.-D. (1999). Classification Algorithms for NETNEWS Articles. In Pro-ceedings of CIKM-99, 8th ACM International Conference on Information and Knowledge Management. Kansas City, MO, ACM Press, New York: 114–121. Hsu, W.-L., and Lang, S.-D. (1999). Feature Reduction and Database Maintenance in NET-NEWS Classification. In Proceedings of IDEAS-99, 1999 International Database Engineer-ing and Applications Symposium. Montreal, IEEE Computer Society Press, Los Alamitos, CA: 137–144. Huang, S., Ward, M., and Rudensteiner, E. (2003). Exploration of Dimensionality Reduction for Text Visualization. Worcester, MA, Worcester Polytechnic Institute. Bibliography 357 Hubona, G. S., Shirah, G., and Fout, D. (1997). “The Effects of Motion and Stereopsis on Three-Dimensional Visualization.” International Journal of Human Computer Studies 47(5): 609–627. Huffman, S. (1995). Acquaintance: Language-Independent Document Categorization by N-Grams. In Proceedings of TREC-4, 4th Text Retrieval Conference. Gaithersburg, MD, National Institute of Standards and Technology, Gaithersburg, MD: 359–371. Huffman, S., and Damashek, M. (1994). Acquaintance: A Novel Vector-Space N-Gram Technique for Document Categorization. In Proceedings of TREC-3, 3rd Text Retrieval Conference, D. K. Harman, ed. Gaithersburg, MD, National Institute of Standards and Technology, Gaithersburg, MD: 305–310. Huffman,S.B.(1995).“LearningInformationExtractionPatternsfromExamples.”InConnec-tionist, Statistical, and Symbolic Approaches to Learning for Natural Language Processing. S. Wermter, E. Riloff, and G. Scheler, eds. London, Springer-Verlag: 246–260. Hull, D. (1996). “Stemming Algorithms – A Case Study for Detailed Evaluation.” Journal of the American Society for Information Science 47(1): 70–84. Hull, D. A. (1994). Improving Text Retrieval for the Routing Problem Using Latent Semantic Indexing. In Proceedings of SIGIR-94, 17th ACM International Conference on Research and Development in Information Retrieval. Dublin, Springer-Verlag, Heidelberg: 282–289. Hull, D. A. (1998). The TREC-7 Filtering Track: Description and Analysis. In Proceedings of TREC-7, 7th Text Retrieval Conference. Gaithersburg, MD, National Institute of Standards and Technology, Gaithersburg, MD: 33–56. Hull, D. A., Pedersen, J. O., and Schutze, H. (1996). Method Combination for Document Filtering.InProceedingsofSIGIR-96,19thACMInternationalConferenceonResearchand Development in Information Retrieval. H.-P. Frei, D. Harman, P . Schable, and R. Wilkinson, eds. Zurich, ACM Press, New York: 279–288. Hummon, M. 
P., and Carley, K. (1993). “Social Networks as Normal Science.” Social Networks 14: 71–106. Humphreys, K., Gaizauskas, R., and Azzam, S. (1997). Event Coreference for Information Extraction. In Proceedings of the Workshop on Operational Factors in Practical, Robust, AnaphoraResolutionforUnrestrictedTexts.Madrid,Spain,AssociationforComputational Linguistics, Morristown, NJ: 75–81. Igarashi, T., and Hinckley, K. (2000). Speed-Dependent Automatic Zooming for Browsing Large Documents. In Proceedings of the 11th Annual Symposium on User Interface Soft-ware and Technology (UIST ’00). San Diego, CA, ACM Press, New York: 139–148. IntertekGroup (2002). Leveraging Unstructured Data in Investment Management. Ipeirotis, P. G., Gravano, L., and Sahami, M. (2001). Probe, Count, and Classify: Categorizing Hidden Web Databases. In Proceedings of SIGMOD-01, ACM International Conference on Management of Data. W. G. Aref, ed. Santa Barbara, CA, ACM Press, New York: 67–78. Ittner, D. J., Lewis, D. D., and Ahn, D. D. (1995). Text Categorization of Low Quality Images. In Proceedings of SDAIR-95, 4th Annual Symposium on Document Analysis and Information Retrieval. Las Vegas, NV, ISRI, University of Nevada, Las Vegas, NV: 301–315. Iwayama, M., and Tokunaga, T. (1994). A Probabilistic Model for Text Categorization: Based on a Single Random Variable with Multiple Values. In Proceedings of ANLP-94, 4th Conference on Applied Natural Language Processing. Stuttgart, Germany, Association for Computa-tional Linguistics, Morristown, NJ: 162–167. Iwayama, M., and Tokunaga, T. (1995a). Cluster-Based Text Categorization: A Comparison of Category Search Strategies. In Proceedings of SIGIR-95, 18th ACM International Confer-ence on Research and Development in Information Retrieval. E. A. Fox, P . Ingwersen, and R. Fidel, eds. Seattle, ACM Press, New York: 273–281. Iwayama, M., and Tokunaga, T. (1995b). Hierarchical Bayesian Clustering for Automatic Text Classification. In Proceedings of IJCAI-95, 14th International Joint Conference on Artificial 358 Bibliography Intelligence. C. E. Mellish, ed. Montreal, Morgan Kaufmann Publishers, San Francisco: 1322–1327. Iwazume, M., Takeda, H., and Nishida, T. (1996). Ontology-Based Information Gathering and Text Categorization from the Internet. In Proceedings of IEA/AIE-96, 9th International Conference in Industrial and Engineering Applications of Artificial Intelligence and Expert Systems. T. Tanaka, S. Ohsuga, and M. Ali, eds. Fukuoka, Japan: 305–314. Iyer, R. D., Lewis, D. D., Schapire, R. E., Singer, Y., and Singhal, A. (2000). Boosting for Document Routing. In Proceedings of CIKM-00, 9th ACM International Conference on Information and Knowledge Management. A. Agah, J. Callan, and E. Rundensteiner, eds. McLean, VA, ACM Press, New York: 70–77. Jacobs, P. S. (1992). Joining Statistics with NLP for Text Categorization. In Proceedings of ANLP-92, 3rd Conference on Applied Natural Language Processing. M. Bates and O. Stock, eds. Trento, Italy, Association for Computational Linguistics, Morristown, NJ: 178–185. Jacobs, P. S. (1993). “Using Statistical Methods to Improve Knowledge-Based News Catego-rization.” IEEE Expert 8(2): 13–23. Jain, A., and Dubes, R. (1988). Algorithms for Clustering Data. Englewood Cliffs, NJ, Prentice Hall. Jain, A. K., and Chellappa, R., eds. (1993). Markov Random Fields: Theory and Application. Boston, Academic Press. Jain, A. K., Murty, M. N., and Flynn, P. J. (1999). “Data Clustering: A Review.” ACM Com-puting Surveys 31(3): 264–323. Jensen, J. R. (1996). 
Introductory Digital Image Processing – A Remote Sensing Perspective. Englewood Cliffs, NJ, Prentice Hall. Jerding, D., and Stasko, J. (1995). The Information Mural: A Technique for Displaying and Navigating Large Information Spaces. In Proceedings of Information Visualization ’95 Symposium. Atlanta, IEEE Computer Society, Washington, DC: 43. Jo, T. C. (1999a). “News Article Classification Based on Categorical Points from Keywords in Backdata.” In Computational Intelligence for Modelling, Control and Automation. M. Mohammadian, ed. Amsterdam, IOS Press: 211–214. Jo, T. C. (1999b). “News Articles Classification Based on Representative Keywords of Cate-gories.” In Computational Intelligence for Modelling, Control and Automation. M. Moham-madian, ed. Amsterdam, IOS Press: 194–198. Jo, T. C. (1999c). Text Categorization with the Concept of Fuzzy Set of Informative Keywords. In Proceedings of FUZZ-IEEE ’99, IEEE International Conference on Fuzzy Systems. Seoul, KR, IEEE Computer Society Press, Los Alamitos, CA: 609–614. Joachims, T. (1997). A Probabilistic Analysis of the Rocchio Algorithm with TFIDF for Text Categorization. In Proceedings of ICML-97, 14th International Conference on Machine Learning. D. H. Fisher, ed. Nashville, TN, Morgan Kaufmann Publishers, San Francisco: 143–151. Joachims, T. (1998). Text Categorization with Support Vector Machines: Learning with Many Relevant Features. In Proceedings of ECML-98, 10th European Conference on Machine Learning. C. Nedellec and C. Rouveirol, eds. Chemnitz, Germany, Springer-Verlag, Heidelberg: 137–142. Joachims, T. (1999). Transductive Inference for Text Classification Using Support Vector Machines. In Proceedings of ICML-99, 16th International Conference on Machine Learn-ing. I. Bratko and S. Dzeroski, eds. Bled, Morgan Kaufmann Publishers, San Francisco: 200–209. Joachims, T. (2000). Estimating the Generalization Performance of a SVM Efficiently. In Pro-ceedings of ICML-00, 17th International Conference on Machine Learning. P . Langley, ed. Stanford, CA, Morgan Kaufmann Publishers, San Francisco: 431–438. Joachims, T. (2001). A Statistical Learning Model of Text Classification with Support Vector Machines. In Proceedings of SIGIR-01, 24th ACM International Conference on Research and Development in Information Retrieval. W. B. Croft, D. J. Harper, D. H. Kraft, and J. Zobel, eds. New Orleans, ACM Press, New York: 128–136. Bibliography 359 Joachims, T. (2002). Learning to Classify Text Using Support Vector Machines. Dordrecht, Kluwer Academic Publishers. Joachims, T., Cristianini, N., and Shawe-Taylor, J. (2001). Composite Kernels for Hypertext Categorisation. In Proceedings of ICML-01, 18th International Conference on Machine Learning. C. Brodley and A. Danyluk, eds. Williams College, MA, Morgan Kaufmann Publishers, San Francisco: 250–257. Joachims, T., Freitag, D., and Mitchell, T. M. (1997). WebWatcher: A Tour Guide for the Word Wide Web. In Proceedings of IJCAI-97, 15th International Joint Conference on Artificial Intelligence. M. E. Pollack, ed. Nagoya, Japan, Morgan Kaufmann Publishers, San Francisco: 770–775. Joachims, T., and Sebastiani, F. (2002). “Guest Editors’ Introduction to the Special Issue on Automated Text Categorization.” Journal of Intelligent Information Systems 18(2/3): 103– 105. Johnson, B., and Shneiderman, B. (1991). “Treemaps: A Space-Filling Approach to the Visu-alization of Hierarchical Information.” In Proceedings of IEEE Visualization ’91 Confer-ence. G. Nielson and L. Rosenblum, eds. 
San Diego, CA, IEEE Computer Society Press, Los Alamitos, CA: 284–291. Juan, A., and Vidal, E. (2002). “On the Use of Bernoulli Mixture Models for Text Classifica-tion.” Pattern Recognition 35(12): 2705–2710. Junker, M., and Abecker, A. (1997). Exploiting Thesaurus Knowledge in Rule Induction for Text Classification. In Proceedings of RANLP-97, 2nd International Conference on Recent Advances in Natural Language Processing. Tzigov Chark, Bulgaria: 202–207. Junker, M., and Dengel, A. (2001). Preventing Overfitting in Learning Text Patterns for Doc-ument Categorization. In Proceedings of ICAPR-01, 2nd International Conference on Advances in Pattern Recognition. S. Singh, N. A. Murshed, and W. G. Kropatsch, eds. Rio de Janeiro, Springer-Verlag, Heidelberg: 137–146. Junker, M., and Hoch, R. (1998). “An Experimental Evaluation of OCR Text Representa-tions for Learning Document Classifiers.” International Journal on Document Analysis and Recognition 1(2): 116–122. Junker, M., Sintek, M., and Rinck, M. (2000). Learning for Text Categorization and Information Extraction with ILP. In Proceedings of the 1st Workshop on Learning Language in Logic. Bled, Slovenia, Springer-Verlag, Heidelberg: 247–258. Kaban, A., and Girolami, M. (2002). “A Dynamic Probabilistic Model to Visualise Topic Evolution in Text Streams.” Journal of Intelligent Information Systems 18(2/3): 107– 125. Kamada, T., and Kawai, S. (1989). “An Algorithm for Drawing General Undirected Graphs.” Information Processing Letters 31: 7–15. Kar, G., and White, L. J. (1978). “A Distance Measure for Automated Document Classification by Sequential Analysis.” Information Processing and Management 14(2): 57–69. Karrer,A.,andScacchi,W.(1990).RequirementsforanExtensibleObject-OrientedTree/Graph Editor. In Proceedings of ACM SIGGRAPH Symposium on User Interface Software and Technology. Snowbird, UT, ACM Press, New York: 84–91. Karypis, G., and Han, E.-H. (2000). Fast Supervised Dimensionality Reduction Algorithm with Applications to Document Categorization and Retrieval. In Proceedings of CIKM-00, 9th ACM International Conference on Information and Knowledge Management. A. Agah, J. Callan, and E. Rundensteiner, eds. McLean, VA, ACM Press, New York: 12–19. Kaski, S. (1997). Data Exploration Using Self-Organizing Maps. Tech thesis, Helsinki Univer-sity of Technology. Kaski, S., Honkela, T., Lagus, K., and Kohonen, T. (1998). “WEBSOM-Self-Organizing Maps of Document Collections.” Neurocomputing 21: 101–117. Kaski,S.,Lagus,K.,Honkela,T.,andKohonen,T. (1998).“StatisticalAspectsoftheWEBSOM System in Organizing Document Collections.” Computing Science and Statistics 29: 281– 290. 360 Bibliography Kawatani, T. (2002). Topic Difference Factor Extraction between Two Document Sets and Its Application to Text Categorization. In Proceedings of SIGIR-02, 25th ACM Interna-tional Conference on Research and Development in Information Retrieval. K. Jarvelin, M. Beaulieu, R. Baeza-Yates, and S. H. Myaeng, eds. Tampere, Finland, ACM Press, New York: 137–144. Kehagias, A., Petridis, V., Kaburlasos, V. G., and Fragkou, P . (2003). “A Comparison of Word-and Sense-Based Text Categorization Using Several Classification Algorithms.” Journal of Intelligent Information Systems 21(3): 227–247. Kehler, A. (1997). Probabilistic Coreference in Information Extraction. In Proceedings of the 2nd Conference on Empirical Methods in Natural Language Processing. C. Cardie and R. Weischedel, eds. Providence, RI, Association for Computational Linguistics, Somerset, NJ: 163–173. Keim, D. 
(2002). “Information Visualization and Visual Data Mining.” IEEE Transactions on Visualization and Computer Graphics 8(1): 1–8. Keller, B. (1992). A Logic for Representing Grammatical Knowledge. In Proceedings of Euro-pean Conference on Artificial Intelligence. Vienna, Austria, John Wiley and Sons, New York: 538–542. Kennedy, C., and Boguraev, B. (1997). Anaphora for Everyone: Pronominal Anaphora Res-olution Without a Parser. In Proceedings of the 16th International Conference on Compu-tational Linguistics. J. Tsujii, ed. Copenhagen, Denmark, Association for Computationsl Linguistics, Morristown, NJ: 113–118. Keogh, E., and Smyth, P. (1997). A Probabilistic Approach to Fast Pattern Matching in Time Series Databases. In Proceedings of the 3rd International Conference on Knowledge Dis-covery and Data Mining (KDD’97). D. Heckerman, H. Mannila, D. Pregibon, and R. Uthu-rusamy, eds. Newport Beach, CA, AAAI Press, Menlo Park, CA: 24–30. Kessler, B., Nunberg, G., and Schutze, H. (1997). Automatic Detection of Text Genre. In Pro-ceedings of ACL-97, 35th Annual Meeting of the Association for Computational Linguistics. P. R. Cohen and W. Wahlster, eds. Madrid, Morgan Kaufmann Publishers, San Francisco: 32–38. Khmelev, D. V., and Teahan, W. J. (2003). A Repetition Based Measure for Verification of Text Collections and for Text Categorization. In Proceedings of SIGIR-03, 26th ACM Inter-national Conference on Research and Development in Information Retrieval. C. Clarke, G. Cormack, J. Callan, D. Hawking, and A. Smeaton, eds. Toronto, ACM Press, New York: 104–110. Kim, H. (2002). “Predicting How Ontologies for the Semantic Web Will Evolve.” CACM 45(2): 48–54. Kim, J.-T., and Moldovan, D. I. (1995). “Acquisition of Linguistic Patterns for Knowledge-Based Information Extraction.” TKDE 7(5): 713–724. Kim, Y.-H., Hahn, S.-Y., and Zhang, B.-T. (2000). Text Filtering by Boosting Naive Bayes Classifiers. In Proceedings of SIGIR-00, 23rd ACM International Conference on Research and Development in Information Retrieval. N. J. Belkin, P . Ingwersen, and M. K. Leong, eds. Athens, ACM Press, New York: 168–75. Kindermann, J., Paass, G., and Leopold, E. (2001). Error Correcting Codes with Optimized Kullback–Leibler Distances for Text Categorization. In Proceedings of ECML-01, 12th Euro-pean Conference on Machine Learning. L. de Raedt and A. Siebes, eds. Freiburg, Germany, Springer-Verlag, Heidelberg: 266–275. Kindermann, R., and Snell, J. L. (1980). Markov Random Fields and Their Applications. Prov-idence, RI, American Mathematical Society. Klas, C.-P., and Fuhr, N. (2000). A New Effective Approach for Categorizing Web Documents. In Proceedings of BCSIRSG-00, 22nd Annual Colloquium of the British Computer Society Information Retrieval Specialist Group. Cambridge, UK, BCS, Swinden, UK. Klebanov, B., and Wiemer-Hastings, P. M. (2002). Using LSA for Pronominal Anaphora Res-olution. In Proceedings of the 3rd International Conference on Computational Linguistics and Intelligent Text Processing. A. F. Gelbukh, ed. Mexico City, Springer, Berlin: 197–199. Bibliography 361 Klingbiel, P. H. (1973a). “Machine-Aided Indexing of Technical Literature.” Information Stor-age and Retrieval 9(2): 79–84. Klingbiel, P. H. (1973b). “A Technique for Machine-Aided Indexing.” Information Storage and Retrieval 9(9): 477–494. Klinkenberg, R., and Joachims, T. (2000). Detecting Concept Drift with Support Vector Machines. In Proceedings of ICML-00, 17th International Conference on Machine Learn-ing. P. Langley, ed. 
Stanford, CA, Morgan Kaufmann Publishers, San Francisco: 487– 494. Kloesgen, W. (1992). “Problems for Knowledge Discovery in Databases and Their Treatment in the Statistics Interpreter EXPLORA.” International Journal for Intelligent Systems 7(7): 649–673. Kloesgen, W. (1995a). “Efficient Discovery of Interesting Statements in Databases.” Journal of Intelligent Information Systems 4: 53–69. Kloesgen, W. (1995b). “EXPLORA: A Multipattern and Multistrategy Discovery Assistant.” In Advances in Knowledge Discovery and Data Mining. U. Fayyad, G. Piatetsky-Shapiro, and R. Smyth, eds. Cambridge, MA, MIT Press: 249–271. Kloesgen, W., and Zytkow, J., eds. (2002). Handbook of Data Mining and Knowledge Discov-ery. Oxford, UK, Oxford University Press. Kloptchenko, A., Eklund, T., Back, B., Karlson, J., Vanharanta, H., and Visa, A. (2002). “Com-bining Data and Text Mining Techniques for Analyzing Financial Reports.” International Journal of Intelligent Systems in Accounting, Finance, and Management 12(1): 29–41. Knorr, E., Ng, R., and Tucatov, V. (2000). “Distance Based Outliers: Algorithims and Appli-cations.” The VLDB Journal 8(3): 237–253. Knorz, G. (1982). A Decision Theory Approach to Optimal Automated Indexing. In Pro-ceedings of SIGIR-82, 5th ACM International Conference on Research and Develop-ment in Information Retrieval. G. Salton and H.-J. Schneider, eds. Berlin, Springer-Verlag, Heidelberg: 174–193. Ko, Y., Park, J., and Seo, J. (2002). Automatic Text Categorization Using the Importance of Sentences. In Proceedings of COLING-02, 19th International Conference on Computa-tional Linguistics. Taipei, Taiwan, Association for Computational Linguistics, Morristown NJ/Morgan Kaufmann Publishers, San Francisco, CA: 1–7. Ko, Y., and Seo, J. (2000). Automatic Text Categorization by Unsupervised Learning. In Pro-ceedings of COLING-00, 18th International Conference on Computational Linguistics. Saarbr¨ ucken, Germany, Association for Computational Linguistics, Morristown, NJ: 453– 459. Ko, Y., and Seo, J. (2002). Text Categorization Using Feature Projections. In Proceedings of COLING-02, 19th International Conference on Computational Linguistics. Taipei, Taiwan, Association for Computational Linguistics, Morristown, NJ/Morgan Kauffman Publishers, San Francisco, CA: 453–459. Kobsa, A. (2001). An Empirical Comparison of Three Commercial Information Visualization Systems. In Proceedings of Infovis 2001, IEEE Symposium on Information Visualization. San Diego, CA, IEEE Computer Society Press, Washington, DC: 123. Koehn, P. (2002). Combining Multiclass Maximum Entropy Text Classifiers with Neural Net-work Voting. In Proceedings of PorTAL-02, 3rd International Conference on Advances in Natural Language Processing. Faro, Portugal, Springer, Berlin: 125–132. Kohlhase, M. (2000). “Model Generation for Discourse Representation Theory.” In Proceed-ings of the 14th European Conference on Artificial Intelligence. W. Horn, ed. Berlin, IOS Press, Amsterdam: 441–445. Kohonen, T. (1981). Automatic Formation of Topological Maps of Patterns in a Self-Organizing System. In Proceedings of 2SCIA, 2nd Scandinavian Conference on Image Analysis. E. Uja and O. Simula, eds. Helsinki, Finland, Suomen Hahmontunnistustutkimuksen Seura r.y.: 214–220. Kohonen, T. (1982). “Analysis of Simple Self-Organizing Process.” Biological Cybernetics 44(2): 135–140. 362 Bibliography Kohonen, T. (1995). Self-Organizing Maps. Berlin, Springer-Verlag. Kohonen, T. (1997). Exploration of Very Large Databases by Self-Organizing Maps. 
In Pro-ceedings of ICNN ’97, International Conference on Neural Networks. Houston, TX, IEEE Service Center Press, Piscataway, NJ: 1–6. Kohonen, T. (1998). Self-Organization of Very Large Document Collections: State of the Art. In Proceedings of ICANN98, 8th International Conference on Artificial Neural Networks. M. Niklasson and T. Zienkke, eds. Sk¨ ovde, Sweden, Springer-Verlag, London: 65–74. Kohonen, T., Kaski, S., Lagus, K., and Honkela, T. (1996). Very Large Two-Level SOM for the Browsing of Newsgroups. In Proceedings of ICANN96, International Conference on Artificial Neural Networks. Bochum, Germany, Springer-Verlag, Berlin: 269–274. Kohonen, T., Kaski, S., Lagus, K., Salojarvi, J., Honkela, T., Paatero, V., and Saarela, A. (1999). “Self-Organization of a Massive Text Document Collection.” In Kohonen Maps. E. Oja and S. Kaski, eds. Amsterdam, Elsevier: 171–182. Koike, H. (1993). “The Role of Another Spatial Dimension in Software Visualization.” ACM Transactions on Information Systems 11(3): 266–286. Koike, H. (1995). “Fractal Views: A Fractal-Based Method for Controlling Information Dis-play.” ACM Transactions on Information Systems 13(3): 305–323. Koike, T., and Rzhetsky, A. (2000). “A Graphic Editor for Analyzing Signal-Transduction Pathways.” Gene 259: 235–244. Kolcz, A., Prabakarmurthi, V., and Kalita, J. K. (2001). String Match and Text Extraction: Sum-marization as Feature Selection for Text Categorization. In Proceedings of CIKM-01, 10th ACM International Conference on Information and Knowledge Management. W. Paques, L. Liu, and D. Grossman, eds. Atlanta, ACM Press, New York: 365–370. Koller, D., and Sahami, M. (1997). Hierarchically Classifying Documents Using Very Few Words. In Proceedings of ICML-97, 14th International Conference on Machine Learning. D. H. Fisher, ed. Nashville, TN, Morgan Kaufmann Publishers, San Francisco: 170–178. Kongovi, M., Guzman, J. C., and Dasigi, V. (2002). Text Categorization: An Experiment Using Phrases. In Proceedings of ECIR-02, 24th European Colloquium on Information Retrieval Research. F. Cresteni, M. Girotami, and C. J. v. Rijsbergen, eds. Glasgow, Springer-Verlag, Heidelberg: 213–228. Kopanis, I., Avouris, N. M., and Daskalaki, S. (2002). The Role of Knowledge Mining in a Large Scale Data Mining Project. In Proceedings of Methods and Applications of Artifi-cial Intelligence, 2nd Hellenic Conference on AI. I. P . Vlahavas and C. Spyropoulos, eds. Thessaloniki, Greece, Springer-Verlag, Berlin: 288–299. Koppel, M., Argamon, S., and Shimoni, A. R. (2002). “Automatically Categorizing Written Texts by Author Gender.” Literary and Linguistic Computing 17(4): 401–412. Kosmynin, A., and Davidson, I. (1996). Using Background Contextual Knowledge for Docu-ment Representation. In Proceedings of PODP-96, 3rd International Workshop on Principles of Document Processing. C. Nicholas and D. Wood, eds. Palo Alto, CA, Springer-Verlag, Heidelberg: 123–133. Koster, C. H., and Seutter, M. (2003). Taming Wild Phrases. In Proceedings of ECIR-03, 25th European Conference on Information Retrieval. F. Sebastiani, ed. Pisa, Italy, Springer-Verlag, Heidelberg: 161–176. Krauthammer, M., Rzhetsky, A., Morozov, P., and Friedman, C. (2000). “Using BLAST for Identifying Gene and Protein Names in Journal Articles.” Gene 259: 245–252. Krier, M., and Zacc, F. (2002). “Automatic Categorization Applications at the European Patent Office.” World Patent Information 24: 187–196. Krishnapuram, R., Chitrapura, K., and Joshi, S. (2003). 
Classification of Text Documents Based on Minimum System Entropy. In Proceedings of ICML-03, 20th International Conference on Machine Learning. Washington, DC, Morgan Kaufmann Publishers, San Francisco: 384– 391. Kupiec, J. (1992). “Robust Part-of-Speech Tagging Using a Hidden Markov model.” Computer Speech and Language 6: 225–243. Bibliography 363 Kushmerick, N. (1997). Wrapper Induction for Information Extraction. Ph.D. thesis, Depart-ment of Computer Science and Engineering, University of Washington. Kushmerick, N. (2000). “Wrapper Induction: Efficiency and Expressiveness.” Artificial Intel-ligence 118(1–2): 15–68. Kushmerick, N. (2002). Finite-State Approaches to Web Information Extraction. In Proceed-ings of the 3rd Summer Convention on Information Extraction in the Web Era: Natural Language Communication for Knowledge Acquisition and Intelligent Information Agents. M. Pazienza, ed. Rome, Springer-Verlag, Berlin: 77–91. Kushmerick, N., Johnston, E., and McGuinness, S. (2001). Information Extraction by Text Classification. In Proceedings of IJCAI-01 Workshop on Adaptive Text Extraction and Mining. Seattle, Morgan Kaufmann Publishers, San Francisco. Kushmerick, N., Weld, D. S., and Doorenbos, R. B. (1997). Wrapper Induction for Informa-tion Extraction. In Proceedings of the 15th International Joint Conference on Artificial Intelligence (IJCAI). Nagoya, Japan, Morgan Kaufmann Publishers, San Francisco: 729– 735. Kwok, J. T. (1998). Automated Text Categorization Using Support Vector Machine. In Pro-ceedings of ICONIP ’98, 5th International Conference on Neural Information Processing. Kitakyushu, Japan: 347–351. Kwon, O.-W., Jung, S.-H., Lee, J.-H., and Lee, G. (1999). Evaluation of Category Features and Text Structural Information on a Text Categorization Using Memory-Based Reasoning. In Proceedings of ICCPOL-99, 18th International Conference on Computer Processing of Oriental Languages. Tokushima, Japan: 153–158. Kwon, O.-W., and Lee, J.-H. (2003). “Text Categorization Based on k-nearest Neighbor Approach for Web Site Classification.” Information Processing and Management 39(1): 25–44. Labrou, Y., and Finin, T. (1999). Yahoo! as an Ontology: Using Yahoo! Categories to Describe documents. In Proceedings of CIKM-99, 8th ACM International Conference on Information and Knowledge Management. Kansas City, MO, ACM Press, New York: 180–187. Lafferty, J., McCallum, A., and Pereira, F. (2001). Conditional Random Fields: Probabilistic Models for Segmenting and Labeling Sequence Data. In Proceedings of 18th International Conference on Machine Learning. Williamstown, MA, Morgan Kaufmann Publisher, San Francisco: 282–289. Lager, T. (1998). Logic for Part-of-Speech Tagging and Shallow Parsing. In Proceedings of NODALIDA ’98. Copenhagen, Denmark, Center for Sprogteknologi, Univio Copenhagen, Copenhagen. Lagus, K. (1998). Generalizability of the WEBSOM Method to Document Collections of Var-ious Types. In Proceedings of 6th European Congress on Intelligent Techniques and Soft Computing (EUFIT’98). Aachen, Germany, Verlag Mainz, Mainz: 210–215. Lagus, K. (2000a). Text Mining with the WEBSOM. D. Sc. (Tech) thesis, Department of Com-puter Science and Engineering, Helsinki University of Technology. Lagus, K. (2000b). Text Retrieval Using Self-Organized Document Maps. Technical Report A61, Laboratory of Computer and Information Science, Helsinki University of Technology. Lagus, K., Honkela, T., Kaski, S., and Kohonen, T. (1999). “WEBSOM for Textual Data Mining.” Artificial Intelligence Review 13(5/6): 345–364. 
Lai, K.-Y., and Lam, W. (2001). Meta-Learning Models for Automatic Textual Document Cate-gorization. In Proceedings of PAKDD-01, 5th Pacific-Asia Conference on Knowledge Dis-covery and Data Mining. D. Cheung, Q. Li, and G. Williams, eds. Hong Kong, Springer Verlag, Heidelberg: 78–89. Lai, Y.-S., and Wu, C.-H. (2002). “COLUMN: Meaningful Term Extraction and Discriminative Term Selection in Text Categorization via Unknown-Word Methodology.” ACM Transac-tions on Asian Language Information Processing 1(1): 34–64. Lam, S. L., and Lee, D. L. (1999). Feature Reduction for Neural Network Based Text Catego-rization. In Proceedings of DASFAA-99, 6th IEEE International Conference on Database 364 Bibliography Advanced Systems for Advanced Application. A. L. Chen and F. H. Lochovsky, eds. Hsinchu, Taiwan, IEEE Computer Society Press, Los Alamitos, CA: 195–202. Lam, W., and Ho, C. Y. (1998). Using a Generalized Instance Set for Automatic Text Cate-gorization. In Proceedings of SIGIR-98, 21st ACM International Conference on Research and Development in Information Retrieval. W. B. Croft, A. Moffat, C. J. van Rijsergen, R. Wilkinson, and J. Zobel, eds. Melbourne, Australia, ACM Press, New York: 81–89. Lam, W., and Lai, K.-Y. (2001). A Meta-Learning Approach for Text Categorization. In Pro-ceedings of SIGIR-01, 24th ACM International Conference on Research and Development in Information Retrieval. W. B. Croft, D. J. Harper, D. H. Kraft, and J. Zobel, eds. New Orleans, ACM Press, New York: 303–309. Lam, W., Low, K. F., and Ho, C. Y. (1997). Using a Bayesian Network Induction Approach for Text Categorization. In Proceedings of IJCAI-97, 15th International Joint Conference on Artificial Intelligence. M. E. Pollack, ed. Nagoya, Japan, Morgan Kaufmann Publishers, San Francisco: 745–750. Lam, W., Ruiz, M. E., and Srinivasan, P. (1999). “Automatic Text Categorization and Its Appli-cations to Text Retrieval.” IEEE Transactions on Knowledge and Data Engineering 11(6): 865–879. Lamping, J., and Rao, R. (1994). Laying Out and Visualizing Large Trees Using a Hyperbolic Space. In Proceedings of the ACM UIST (UIST ’94). P . Szekely, ed. Marina Del Rey, CA, ACM Press, New York: 13–14. Lamping, L., Rao, R., and Pirolli, P. (1995). A Focus-Context Technique Based on Hyperbolic Geometry for Visualizing Large Hierarchies. In Proceedings of the ACM SIGCHI Confer-ence on Human Factors in Computer Systems. I. Katz, R. Mack, L. Marks, M. B. Rosson, and J. Nielsen, eds. Denver, CO, ACM Press, New York: 401–408. Landau, D., Feldman, R., Aumann, Y., Fresko, M., Lindell, Y., Liphstat, O., and Zamir, O. (1998). TextVis: An Integrated Visual Environment for Text Mining. In Proceedings of the 2nd European Symposium on Principles of Data Mining and Knowledge Discovery (PKDD98). Nantes, France, Springer-Verlag, Heidelberg: 56–64. Landauer, T. K., Foltz, P. W., and Laham, D. (1998). “Introduction to Latent Semantic Anal-ysis.” Discourse Processes 25: 259–284. Lang, K. (1995). NewsWeeder: Learning to Filter Netnews. In Proceedings of ICML-95, 12th International Conference on Machine Learning. A. Prieditis and S. J. Russell, eds. Lake Tahoe, NV, Morgan Kaufmann Publishers, San Francisco: 331–339. Lanquillon, C. (2000). Learning from Labeled and Unlabeled Documents: A Comparative Study on Semi-Supervised Text Classification. In Proceedings of PKDD-00, 4th European Conference on Principles of Data Mining and Knowledge Discovery. D. A. Zighed, H. J. Komorowsky and J. M. Zytkow, eds. Lyon, France, Springer-Verlag, Heidelberg: 490–497. 
Lappin, S., and Leass, H. J. (1994). “An Algorithm for Pronominal Anaphora Resolution.” Computational Linguistics 20(4): 535–561.
Larkey, L. S. (1998). Automatic Essay Grading Using Text Categorization Techniques. In Proceedings of SIGIR-98, 21st ACM International Conference on Research and Development in Information Retrieval. W. B. Croft, A. Moffat, C. J. van Rijsbergen, R. Wilkinson, and J. Zobel, eds. Melbourne, Australia, ACM Press, New York: 90–95.
Larkey, L. S. (1999). A Patent Search and Classification System. In Proceedings of DL-99, 4th ACM Conference on Digital Libraries. E. A. Fox and N. Rowe, eds. Berkeley, CA, ACM Press, New York: 179–187.
Larkey, L. S., and Croft, W. B. (1996). Combining Classifiers in Text Categorization. In Proceedings of SIGIR-96, 19th ACM International Conference on Research and Development in Information Retrieval. H. P. Frei, D. Harman, P. Schäuble, and R. Wilkinson, eds. Zurich, ACM Press, New York: 289–297.
Lavelli, A., Califf, M. E., Ciravegna, F., Freitag, D., Giuliano, C., Kushmerick, N., and Romano, L. (2004). A Critical Survey of the Methodology for IE Evaluation. In Proceedings of the 4th International Conference on Language Resources and Evaluation. Lisbon, ELRA, Paris: 1655–1658.
Lavelli, A., Magnini, B., and Sebastiani, F. (2002). Building Thematic Lexical Resources by Bootstrapping and Machine Learning. In Proceedings of the LREC 2002 Workshop on Linguistic Knowledge Acquisition and Representation: Bootstrapping Annotated Language Data. Las Palmas, Canary Islands, ELRA, Paris: 53–62.
Lee, K. H., Kay, J., Kang, B. H., and Rosebrock, U. (2002). A Comparative Study on Statistical Machine Learning Algorithms and Thresholding Strategies for Automatic Text Categorization. In Proceedings of PRICAI-02, 7th Pacific Rim International Conference on Artificial Intelligence. M. Ishizuka and A. Sattar, eds. Tokyo, Springer-Verlag, Heidelberg: 444–453.
Lee, M. D. (2002). Fast Text Classification Using Sequential Sampling Processes. In Proceedings of the 14th Australian Joint Conference on Artificial Intelligence. M. Stumptner, D. Corbett, and M. J. Brooks, eds. Adelaide, Australia, Springer-Verlag, Heidelberg: 309–320.
Lee, Y.-B., and Myaeng, S. H. (2002). Text Genre Classification with Genre-Revealing and Subject-Revealing Features. In Proceedings of SIGIR-02, 25th ACM International Conference on Research and Development in Information Retrieval. M. Beaulieu, R. Baeza-Yates, S. H. Myaeng, and K. Järvelin, eds. Tampere, Finland, ACM Press, New York: 145–150.
Leek, T. R. (1997). Information Extraction Using Hidden Markov Models. Master’s thesis, Computer Science Department, University of California, San Diego.
Lehnert, W., Soderland, S., Aronow, D., Feng, F., and Shmueli, A. (1994). “Inductive Text Classification for Medical Applications.” Journal of Experimental and Theoretical Artificial Intelligence 7(1): 49–80.
Lent, B., Agrawal, R., and Srikant, R. (1997). Discovering Trends in Text Databases. In Proceedings of the 3rd Annual Conference on Knowledge Discovery and Data Mining (KDD-97). D. Heckerman, H. Mannila, D. Pregibon, and R. Uthurusamy, eds. Newport Beach, CA, AAAI Press, Menlo Park, CA: 227–230.
Leopold, E., and Kindermann, J. (2002). “Text Categorization with Support Vector Machines: How to Represent Texts in Input Space?” Machine Learning 46(1/3): 423–444.
Lesk, M. (1997). Practical Digital Libraries: Books, Bytes and Bucks. San Francisco, Morgan Kaufmann Publishers.
Leung, C.-H., and Kan, W.-K. (1997). “A Statistical Learning Approach to Automatic Indexing of Controlled Index Terms.” Journal of the American Society for Information Science 48(1): 55–67.
Leung, Y. K., and Apperley, M. D. (1994). “A Review and Taxonomy of Distortion-Oriented Presentation Techniques.” ACM Transactions on Computer–Human Interaction 1(2): 126–160.
Lewin, I., Becket, R., Boye, J., Carter, D., Rayner, M., and Wirén, M. (1999). Language Processing for Spoken Dialogue Systems: Is Shallow Parsing Enough? Technical Report CRC-074, SRI, Cambridge, MA: 107–110.
Lewis, D., and Catlett, J. (1994). Heterogeneous Uncertainty Sampling for Supervised Learning. In Proceedings of the 11th International Conference on Machine Learning. New Brunswick, NJ, Morgan Kaufmann Publishers, San Francisco: 148–156.
Lewis, D. D. (1991). Data Extraction as Text Categorization: An Experiment with the MUC-3 Corpus. In Proceedings of MUC-3, 3rd Message Understanding Conference. San Diego, CA, Morgan Kaufmann Publishers, San Francisco: 245–255.
Lewis, D. D. (1992a). An Evaluation of Phrasal and Clustered Representations on a Text Categorization Task. In Proceedings of SIGIR-92, 15th ACM International Conference on Research and Development in Information Retrieval. N. Belkin, P. Ingwersen, and A. M. Pejtersen, eds. Copenhagen, ACM Press, New York: 37–50.
Lewis, D. D. (1992b). Representation and Learning in Information Retrieval. Ph.D. thesis, Department of Computer Science, University of Massachusetts.
Lewis, D. D. (1995a). Evaluating and Optimizing Autonomous Text Classification Systems. In Proceedings of SIGIR-95, 18th ACM International Conference on Research and Development in Information Retrieval. E. A. Fox, P. Ingwersen, and R. Fidel, eds. Seattle, ACM Press, New York: 246–254.
Lewis, D. D. (1995b). “A Sequential Algorithm for Training Text Classifiers: Corrigendum and Additional Data.” SIGIR Forum 29(2): 13–19.
Lewis, D. D. (1995c). The TREC-4 Filtering Track: Description and Analysis. In Proceedings of TREC-4, 4th Text Retrieval Conference. D. K. Harman and E. M. Voorhees, eds. Gaithersburg, MD, National Institute of Standards and Technology, Gaithersburg, MD: 165–180.
Lewis, D. D. (1997). “Reuters-21578 Text Categorization Test Collection. Distribution 1.0.” AT&T Labs – Research.
Lewis, D. D. (1998). Naive (Bayes) at Forty: The Independence Assumption in Information Retrieval. In Proceedings of ECML-98, 10th European Conference on Machine Learning. C. Nédellec and C. Rouveirol, eds. Chemnitz, Germany, Springer-Verlag, Heidelberg: 4–15.
Lewis, D. D. (2000). Machine Learning for Text Categorization: Background and Characteristics. In Proceedings of the 21st Annual National Online Meeting. M. E. Williams, ed. New York, Information Today, Medford, NJ: 221–226.
Lewis, D. D., and Gale, W. A. (1994). A Sequential Algorithm for Training Text Classifiers. In Proceedings of SIGIR-94, 17th ACM International Conference on Research and Development in Information Retrieval. W. B. Croft and C. J. van Rijsbergen, eds. Dublin, Springer-Verlag, Heidelberg: 3–12.
Lewis, D. D., and Hayes, P. J. (1994). “Guest Editors’ Introduction to the Special Issue on Text Categorization.” ACM Transactions on Information Systems 12(3): 231.
Lewis, D. D., Li, F., Rose, T., and Yang, Y. (2003). “Reuters Corpus Volume I as a Text Categorization Test Collection.” Journal of Machine Learning Research 5: 361–391.
Lewis, D. D., and Ringuette, M. (1994). A Comparison of Two Learning Algorithms for Text Categorization. In Proceedings of SDAIR-94, 3rd Annual Symposium on Document Analysis and Information Retrieval. Las Vegas, NV, ISRI, University of Nevada, Las Vegas: 81–93.
Lewis, D. D., Schapire, R. E., Callan, J. P., and Papka, R. (1996). Training Algorithms for Linear Text Classifiers. In Proceedings of SIGIR-96, 19th ACM International Conference on Research and Development in Information Retrieval. Zurich, ACM Press, New York: 298–306.
Lewis, D. D., Stern, D. L., and Singhal, A. (1999). ATTICS: A Software Platform for On-line Text Classification. In Proceedings of SIGIR-99, 22nd ACM International Conference on Research and Development in Information Retrieval. M. A. Hearst, F. Gey, and R. Tong, eds. Berkeley, CA, ACM Press, New York: 267–268.
Li, C., Wen, J.-R., and Li, H. (2003). Text Classification Using Stochastic Keyword Generation. In Proceedings of ICML-03, 20th International Conference on Machine Learning. Washington, DC, Morgan Kaufmann Publishers, San Francisco: 469–471.
Li, F., and Yang, Y. (2003). A Loss Function Analysis for Classification Methods in Text Categorization. In Proceedings of ICML-03, 20th International Conference on Machine Learning. Washington, DC, Morgan Kaufmann Publishers, San Francisco: 472–479.
Li, H., and Yamanishi, K. (1997). Document Classification Using a Finite Mixture Model. In Proceedings of ACL-97, 35th Annual Meeting of the Association for Computational Linguistics. P. Cohen and W. Wahlster, eds. Madrid, Morgan Kaufmann Publishers, San Francisco: 39–47.
Li, H., and Yamanishi, K. (1999). Text Classification Using ESC-Based Stochastic Decision Lists. In Proceedings of CIKM-99, 8th ACM International Conference on Information and Knowledge Management. Kansas City, MO, ACM Press, New York: 122–130.
Li, H., and Yamanishi, K. (2002). “Text Classification Using ESC-Based Stochastic Decision Lists.” Information Processing and Management 38(3): 343–361.
Li, W., Lee, B., Krausz, F., and Sahin, K. (1991). Text Classification by a Neural Network. In Proceedings of the 23rd Annual Summer Computer Simulation Conference. D. Pace, ed. Baltimore, Society for Computer Simulation, San Diego, CA: 313–318.
Li, X., and Roth, D. (2002). Learning Question Classifiers. In Proceedings of COLING-02, 19th International Conference on Computational Linguistics. Taipei, Taiwan, Morgan Kaufmann Publishers, San Francisco: 556–562.
Li, Y. H., and Jain, A. K. (1998). “Classification of Text Documents.” The Computer Journal 41(8): 537–546.
Liang, J., Phillips, I., Ha, J., and Haralick, R. (1996). Document Zone Classification Using the Sizes of Connected Components. In Proceedings of Document Recognition III. San Jose, CA, SPIE, Bellingham, WA: 150–157.
Liang, J., Phillips, I., and Haralick, R. (1997). Performance Evaluation of Document Layout Analysis on the UW Data Set. In Proceedings of Document Recognition IV. San Jose, CA, SPIE, Bellingham, WA: 149–160.
Liao, Y., and Vemuri, V. R. (2002). Using Text Categorization Techniques for Intrusion Detection. In Proceedings of the 11th USENIX Security Symposium. D. Boneh, ed. San Francisco: 51–59.
Liddy, E. D., Paik, W., and Yu, E. S. (1994). “Text Categorization for Multiple Users Based on Semantic Features from a Machine-Readable Dictionary.” ACM Transactions on Information Systems 12(3): 278–295.
Liere, R., and Tadepalli, P. (1997). Active Learning with Committees for Text Categorization. In Proceedings of AAAI-97, 14th Conference of the American Association for Artificial Intelligence. Providence, RI, AAAI Press, Menlo Park, CA: 591–596.
Liere, R., and Tadepalli, P. (1998). Active Learning with Committees: Preliminary Results in Comparing Winnow and Perceptron in Text Categorization. In Proceedings of CONALD-98, 1st Conference on Automated Learning and Discovery. Pittsburgh, PA, AAAI Press, Menlo Park, CA.
Lim, J. H. (1999). Learnable Visual Keywords for Image Classification. In Proceedings of DL-99, 4th ACM Conference on Digital Libraries. E. A. Fox and N. Rowe, eds. Berkeley, CA, ACM Press, New York: 139–145.
Lima, L. R. D., Laender, A. H., and Ribeiro-Neto, B. A. (1998). A Hierarchical Approach to the Automatic Categorization of Medical Documents. In Proceedings of CIKM-98, 7th ACM International Conference on Information and Knowledge Management. G. Gardarin, J. C. French, N. Pissinou, K. Makki, and L. Bouganim, eds. Bethesda, MD, ACM Press, New York: 132–139.
Lin, D. (1995). “A Dependency-based Method for Evaluating Broad-Coverage Parsers.” Natural Language Engineering 4(2): 97–114.
Lin, X. (1992). Visualization for the Document Space. In Proceedings of Visualization ’92. Los Alamitos, CA, Center for Computer Legal Research, Pace University/IEEE Computer Society Press, Piscataway, NJ: 274–281.
Lin, X. (1997). “Map Displays for Information Retrieval.” Journal of the American Society for Information Science 48: 40–54.
Lin, X., Soergel, D., and Marchionini, G. (1991). A Self-Organizing Semantic Map for Information Retrieval. In Proceedings of the 14th Annual International ACM/SIGIR Conference on Research & Development in Information Retrieval. Chicago, ACM Press, New York: 262–269.
Litman, D. J., and Passonneau, R. J. (1995). Combining Multiple Knowledge Sources for Discourse Segmentation. In Proceedings of the 33rd Annual Meeting of the Association for Computational Linguistics. Cambridge, MA, Association for Computational Linguistics, Morristown, NJ: 108–115.
Liu, H., Selker, T., and Lieberman, H. (2003). Visualizing the Affective Structure of a Text Document. In Proceedings of the Conference on Human Factors in Computing Systems (CHI 2003). Fort Lauderdale, FL, ACM Press, New York: 740–741.
Liu, X., and Croft, W. B. (2003). “Statistical Language Modeling for Information Retrieval.” Annual Review of Information Science and Technology 39.
Liu, Y., Carbonell, J., and Jin, R. (2003). A New Pairwise Ensemble Approach for Text Classification. In Proceedings of ECML-03, 14th European Conference on Machine Learning. N. Lavrac, D. Gamberger, L. Todorovski, and H. Blockeel, eds. Cavtat-Dubrovnik, Croatia, Springer-Verlag, Heidelberg: 277–288.
Liu, Y., Yang, Y., and Carbonell, J. (2002). Boosting to Correct the Inductive Bias for Text Classification. In Proceedings of CIKM-02, 11th ACM International Conference on Information and Knowledge Management. McLean, VA, ACM Press, New York: 348–355.
Lodhi, H., Saunders, C., Shawe-Taylor, J., Cristianini, N., and Watkins, C. (2002). “Text Classification Using String Kernels.” Journal of Machine Learning Research 2: 419–444.
Lodhi, H., Shawe-Taylor, J., Cristianini, N., and Watkins, C. J. (2001). “Discrete Kernels for Text Categorisation.” In Advances in Neural Information Processing Systems. T. K. Leen, T. Dietterich, and V. Tresp, eds. Cambridge, MA, MIT Press: 563–569.
Lombardo, V. (1991). Parsing Dependency Grammars. In Proceedings of the 2nd Congress of the Italian Association for Artificial Intelligence on Trends in Artificial Intelligence. E. Ardizzone, S. Gaglio, and F. Sorbello, eds. Springer-Verlag, London: 291–300.
Lorrain, F., and White, H. C. (1971). “Structural Equivalence of Individuals in Social Networks.” Journal of Mathematical Sociology 1: 49–80.
Lu, S. Y., and Fu, K. S. (1978). “A Sentence-to-Sentence Clustering Procedure for Pattern Analysis.” IEEE Transactions on Systems, Man and Cybernetics 8: 381–389.
Di Nunzio, G. M., and Micarelli, A. (2003). Does a New Simple Gaussian Weighting Approach Perform Well in Text Categorization? In Proceedings of IJCAI-03, 18th International Joint Conference on Artificial Intelligence. Acapulco, Morgan Kaufmann Publishers, San Francisco: 581–586.
Macskassy, S. A., Hirsh, H., Banerjee, A., and Dayanik, A. A. (2001). Using Text Classifiers for Numerical Classification. In Proceedings of IJCAI-01, 17th International Joint Conference on Artificial Intelligence. B. Nebel, ed. Seattle, Morgan Kaufmann Publishers, San Francisco: 885–890.
Macskassy, S. A., Hirsh, H., Banerjee, A., and Dayanik, A. A. (2003). “Converting Numerical Classification into Text Classification.” Artificial Intelligence 143(1): 51–77.
Maderlechner, G., Suda, P., and Bruckner, T. (1997). “Classification of Documents by Form and Content.” Pattern Recognition Letters 18(11/13): 1225–1231.
Maedche, A., and Staab, S. (2001). “Learning Ontologies for the Semantic Web.” IEEE Intelligent Systems 16(2), Special Issue on the Semantic Web.
Maltese, G., and Mancini, F. (1991). A Technique to Automatically Assign Parts-of-Speech to Words Taking into Account Word-Ending Information through a Probabilistic Model. In Proceedings of Eurospeech 1991. Genoa, Italy, Institut für Kommunikationsforschung und Phonetik, Bonn, Germany: 753–756.
Manevitz, L. M., and Yousef, M. (2001). “One-Class SVMs for Document Classification.” Journal of Machine Learning Research 2: 139–154.
Mannila, H., and Toivonen, H. (1996). On an Algorithm for Finding All Interesting Sentences. In Proceedings of the 13th European Meeting on Cybernetics and Systems Research. R. Trappl, ed. Vienna, Austria, University of Helsinki, Department of Computer Science: 973–978.
Mannila, H., Toivonen, H., and Verkamo, A. (1994). Efficient Algorithms for Discovering Association Rules. In Proceedings of Knowledge Discovery in Databases, AAAI Workshop (KDD ’94). U. M. Fayyad and R. Uthurusamy, eds. Seattle, AAAI Press, Menlo Park, CA: 181–192.
Mannila, H., Toivonen, H., and Verkamo, A. (1995). Discovering Frequent Episodes in Sequences. In Proceedings of the 1st International Conference on Knowledge Discovery and Data Mining. Montreal, AAAI Press, Menlo Park, CA: 210–215.
Mannila, H., Toivonen, H., and Verkamo, A. (1997). “Discovery of Frequent Episodes in Event Sequences.” Data Mining and Knowledge Discovery 1(3): 259–289.
Manning, C., and Schutze, H. (1999). Foundations of Statistical Natural Language Processing. Cambridge, MA, MIT Press.
Marchionini, G. (1995). Information Seeking in Electronic Environments. Cambridge, UK, Cambridge University Press.
Marcus, M. P., Santorini, B., and Marcinkiewicz, M. A. (1994). “Building a Large Annotated Corpus of English: The Penn Treebank.” Computational Linguistics 19(2): 313–330.
Maron, M. E. (1961). “Automatic Indexing: An Experimental Inquiry.” Journal of the Association for Computing Machinery 8(3): 404–417.
Martin, P. (1995). Using the WordNet Concept Catalog and a Relation Hierarchy for Knowledge Acquisition. In Proceedings of Peirce ’95, 4th International Workshop on Peirce. E. Ellis and R. Levinson, eds. Santa Cruz, CA, University of Maryland, MD: 36–47.
Masand, B. (1994). Optimising Confidence of Text Classification by Evolution of Symbolic Expressions. In Advances in Genetic Programming. K. E. Kinnear, ed. Cambridge, MA, MIT Press: 459–476.
Masand, B., Linoff, G., and Waltz, D. (1992). Classifying News Stories Using Memory-Based Reasoning. In Proceedings of SIGIR-92, 15th ACM International Conference on Research and Development in Information Retrieval. N. Belkin, P. Ingwersen, and A. M. Pejtersen, eds. Copenhagen, Denmark, ACM Press, New York: 59–65.
Masui, T., Minakuchi, M., Borden, G., and Kashiwagi, K. (1995). Multiple-View Approach for Smooth Information Retrieval. In Proceedings of the ACM Symposium on User Interface Software and Technology (UIST ’95). G. Robertson, ed. Pittsburgh, ACM Press, New York: 199–206.
Matsuda, K., and Fukushima, T. (1999). Task-Oriented World Wide Web Retrieval by Document-Type Classification. In Proceedings of CIKM-99, 8th ACM International Conference on Information and Knowledge Management. S. Gauch, ed. Kansas City, MO, ACM Press, New York: 109–113.
McCallum, A., Freitag, D., and Pereira, F. (2000). Maximum Entropy Markov Models for Information Extraction and Segmentation. In Proceedings of the 17th International Conference on Machine Learning. Stanford University, Palo Alto, CA, Morgan Kaufmann Publishers, San Francisco: 591–598.
McCallum, A., and Jensen, D. (2003). A Note on the Unification of Information Extraction and Data Mining Using Conditional-Probability, Relational Models. In Proceedings of the IJCAI-03 Workshop on Learning Statistical Models from Relational Data. D. Jensen and L. Getoor, eds. Acapulco, Mexico, published electronically by IJCAI and AAAI: 79–87.
McCallum, A. K., and Nigam, K. (1998). Employing EM in Pool-Based Active Learning for Text Classification. In Proceedings of ICML-98, 15th International Conference on Machine Learning. J. W. Shavlik, ed. Madison, WI, Morgan Kaufmann Publishers, San Francisco: 350–358.
McCallum, A. K., Rosenfeld, R., Mitchell, T. M., and Ng, A. Y. (1998). Improving Text Classification by Shrinkage in a Hierarchy of Classes. In Proceedings of ICML-98, 15th International Conference on Machine Learning. J. W. Shavlik, ed. Madison, WI, Morgan Kaufmann Publishers, San Francisco: 359–367.
McCarthy, J. F., and Lehnert, W. G. (1995). Using Decision Trees for Coreference Resolution. In Proceedings of the 14th International Joint Conference on Artificial Intelligence (IJCAI-95). C. Mellish, ed. Montreal, Morgan Kaufmann Publishers, San Francisco: 1050–1055.
Melancon, G., and Herman, I. (2000). DAG Drawing from an Information Visualization Perspective. In Proceedings of Data Visualization ’00. Amsterdam, Springer-Verlag, Heidelberg: 3–12.
Meretakis, D., Fragoudis, D., Lu, H., and Likothanassis, S. (2000). Scalable Association-Based Text Classification. In Proceedings of CIKM-00, 9th ACM International Conference on Information and Knowledge Management. A. Agah, J. Callan, S. Gauch, and E. Rundensteiner, eds. McLean, VA, ACM Press, New York: 373–374.
Merialdo, B. (1994). “Tagging English Text with a Probabilistic Model.” Computational Linguistics 20(2): 155–172.
Merkl, D. (1998). “Text Classification with Self-Organizing Maps: Some Lessons Learned.” Neurocomputing 21(1/3): 61–77.
Miller, D., Schwartz, R., Weischedel, R., and Stone, R. (1999). Named Entity Extraction from Broadcast News. In Proceedings of the DARPA Broadcast News Workshop. Herndon, VA, Morgan Kaufmann Publishers, San Francisco: 37–40.
Miller, N., Wong, P. C., Brewster, M., and Foote, H. (1998). TOPIC ISLANDS™: A Wavelet-Based Text Visualization System. In Proceedings of IEEE Visualization ’98. Research Triangle Park, NC, ACM Press, New York: 189–196.
Mitkov, R. (1998). Robust Pronoun Resolution with Limited Knowledge. In Proceedings of the 36th Annual Meeting of the Association for Computational Linguistics. Montreal, Canada, Association for Computational Linguistics, Morristown, NJ: 869–875.
Mladenic, D. (1998a). Feature Subset Selection in Text Learning. In Proceedings of ECML-98, 10th European Conference on Machine Learning. C. Nédellec and C. Rouveirol, eds. Chemnitz, Germany, Springer-Verlag, London: 95–100.
Mladenic, D. (1998b). Machine Learning on Non-homogeneous, Distributed Text Data. Ph.D. thesis, J. Stefan Institute, University of Ljubljana.
Mladenic, D. (1998c). Turning Yahoo! into an Automatic Web Page Classifier. In Proceedings of ECAI-98, 13th European Conference on Artificial Intelligence. H. Prade, ed. Brighton, UK, John Wiley and Sons, Chichester, UK: 473–474.
Mladenic, D. (1999). “Text Learning and Related Intelligent Agents: A Survey.” IEEE Intelligent Systems 14(4): 44–54.
Mladenic, D., and Grobelnik, M. (1998). Word Sequences as Features in Text-Learning. In Proceedings of ERK-98, 7th Electrotechnical and Computer Science Conference. Ljubljana, Slovenia: 145–148.
Mladenic, D., and Grobelnik, M. (1999). Feature Selection for Unbalanced Class Distribution and Naive Bayes. In Proceedings of ICML-99, 16th International Conference on Machine Learning. I. Bratko and S. Dzeroski, eds. Bled, Slovenia, Morgan Kaufmann Publishers, San Francisco: 258–267.
Mladenic, D., and Grobelnik, M. (2003). “Feature Selection on Hierarchy of Web Documents.” Decision Support Systems 35(1): 45–87.
Mock, K. (1998). A Comparison of Three Document Clustering Algorithms: TreeCluster, Word Intersection GQF, and Word Intersection Hierarchical Agglomerative Clustering. Technical Report, Intel Architecture Labs.
Moens, M.-F., and Dumortier, J. (2000). “Text Categorization: The Assignment of Subject Descriptors to Magazine Articles.” Information Processing and Management 36(6): 841–861.
Montes-y-Gomez, M., Gelbukh, A., and Lopez-Lopez, A. (2001a). Discovering Association Rules in Semi-Structured Data Sets. In Proceedings of the Workshop on Knowledge Discovery from Distributed, Dynamic, Heterogeneous, Autonomous Data and Knowledge Sources at the 17th International Joint Conference on Artificial Intelligence (IJCAI’2001). Seattle, AAAI Press, Menlo Park, CA: 26–31.
Montes-y-Gomez, M., Gelbukh, A., and Lopez-Lopez, A. (2001b). “Mining the News: Trends, Associations and Deviations.” Computación y Sistemas 5(1): 14–25.
Mooney, R. J., and Roy, L. (2000). Content-Based Book Recommending Using Learning for Text Categorization. In Proceedings of DL-00, 5th ACM Conference on Digital Libraries. San Antonio, TX, ACM Press, New York: 195–204.
Moschitti, A. (2003). A Study on Optimal Parameter Tuning for Rocchio Text Classifier. In Proceedings of ECIR-03, 25th European Conference on Information Retrieval. F. Sebastiani, ed. Pisa, Italy, Springer-Verlag, Heidelberg: 420–435.
Mostafa, J., and Lam, W. (2000). “Automatic Classification Using Supervised Learning in a Medical Document Filtering Application.” Information Processing and Management 36(3): 415–444.
Moulinier, I. (1997). Feature Selection: A Useful Preprocessing Step. In Proceedings of BCSIRSG-97, 19th Annual Colloquium of the British Computer Society Information Retrieval Specialist Group. J. Furner and D. Harper, eds. Aberdeen, UK, Springer-Verlag, Heidelberg: 1–11.
Moulinier, I., and Ganascia, J.-G. (1996). “Applying an Existing Machine Learning Algorithm to Text Categorization.” In Connectionist, Statistical, and Symbolic Approaches to Learning for Natural Language Processing. S. Wermter, E. Riloff, and G. Scheler, eds. Heidelberg, Springer-Verlag: 343–354.
Moulinier, I., Raskinis, G., and Ganascia, J.-G. (1996). Text Categorization: A Symbolic Approach. In Proceedings of SDAIR-96, 5th Annual Symposium on Document Analysis and Information Retrieval. Las Vegas, NV, ISRI, University of Nevada, Las Vegas: 87–99.
Munoz, M., Punyakanok, V., Roth, D., and Zimak, D. (1999). A Learning Approach to Shallow Parsing. Technical Report 2087, University of Illinois at Urbana-Champaign: 18.
Munzner, T., and Burchard, P. (1995). Visualizing the Structure of the World Wide Web in 3D Hyperbolic Space. In Proceedings of VRML ’95. San Diego, CA, ACM Press, New York: 33–38.
Mutton, P. (2004). “Inferring and Visualizing Social Networks on Internet Relay Chat.” Journal of WSCG 12(1–3).
Mutton, P., and Golbeck, J. (2003). Visualization of Semantic Metadata and Ontologies. In Proceedings of Information Visualization 2003 (IV03). London, UK, IEEE Computer Society Press, Washington, DC: 300.
Mutton, P., and Rodgers, P. (2002). Spring Embedder Preprocessing for WWW Visualization. In Proceedings of the 6th International Conference on Information Visualization. London, IEEE Computer Society Press, Washington, DC: 744–749.
Myers, K., Kearns, M., Singh, S., and Walker, M. A. (2000). A Boosting Approach to Topic Spotting on Subdialogues. In Proceedings of ICML-00, 17th International Conference on Machine Learning. P. Langley, ed. Stanford, CA, Morgan Kaufmann Publishers, San Francisco: 655–662.
Nahm, U., and Mooney, R. (2000). A Mutually Beneficial Integration of Data Mining and Information Extraction. In Proceedings of the 17th National Conference on Artificial Intelligence (AAAI-2000). Austin, TX, AAAI Press, Menlo Park, CA: 627–632.
Nahm, U., and Mooney, R. (2001). Mining Soft Matching Rules from Text Data. In Proceedings of the 17th International Joint Conference on Artificial Intelligence. Seattle, WA, Morgan Kaufmann Publishers, San Francisco: 978–992.
Nahm, U. Y., and Mooney, R. J. (2002). Text Mining with Information Extraction. In Proceedings of the AAAI 2002 Spring Symposium on Mining Answers from Texts and Knowledge Bases. S. Harabagiu and V. Chaudhri, eds. Palo Alto, CA, AAAI Press, Menlo Park, CA: 60–68.
Nardiello, P., Sebastiani, F., and Sperduti, A. (2003). Discretizing Continuous Attributes in AdaBoost for Text Categorization. In Proceedings of ECIR-03, 25th European Conference on Information Retrieval. F. Sebastiani, ed. Pisa, Italy, Springer-Verlag, Heidelberg: 320–334.
Nasukawa, T., and Nagano, T. (2001). “Text Analysis and Knowledge Mining System.” IBM Systems Journal 40(4): 967–984.
Neuhaus, P., and Broker, N. (1997). The Complexity of Recognition of Linguistically Adequate Dependency Grammars. In Proceedings of the 35th Annual Meeting of the Association for Computational Linguistics and 8th Conference of the European Chapter of the Association for Computational Linguistics. P. R. Cohen and W. Wahlster, eds. Somerset, NJ, Association for Computational Linguistics: 337–343.
Ng, G. K.-C. (2000). Interactive Visualisation Techniques for Ontology Development. Ph.D. thesis, Department of Computer Science, University of Manchester.
Ng, H. T., Goh, W. B., and Low, K. L. (1997). Feature Selection, Perceptron Learning, and a Usability Case Study for Text Categorization. In Proceedings of SIGIR-97, 20th ACM International Conference on Research and Development in Information Retrieval. N. J. Belkin, A. Narasimhalu, W. Hersh, and P. Willett, eds. Philadelphia, ACM Press, New York: 67–73.
Ng, V., and Cardie, C. (2002). Improving Machine Learning Approaches to Coreference Resolution. In Proceedings of the 40th Annual Meeting of the Association for Computational Linguistics. Philadelphia, Association for Computational Linguistics, Morristown, NJ: 104–111.
Ng, V., and Cardie, C. (2003). Bootstrapping Coreference Classifiers with Multiple Machine Learning Algorithms. In Proceedings of the 2003 Conference on Empirical Methods in Natural Language Processing (EMNLP-2003). Sapporo, Japan, Association for Computational Linguistics, Morristown, NJ: 113–120.
Nigam, K. (2001). Using Unlabeled Data to Improve Text Classification. Ph.D. thesis, Computer Science Department, Carnegie Mellon University.
Nigam, K., and Ghani, R. (2000). Analyzing the Effectiveness and Applicability of Co-training. In Proceedings of CIKM-00, 9th ACM International Conference on Information and Knowledge Management. A. Agah, J. Callan, S. Gauch, and E. Rundensteiner, eds. McLean, VA, ACM Press, New York: 86–93.
Nigam, K., McCallum, A. K., Thrun, S., and Mitchell, T. M. (1998). Learning to Classify Text from Labeled and Unlabeled Documents. In Proceedings of AAAI-98, 15th Conference of the American Association for Artificial Intelligence. Madison, WI, AAAI Press, Menlo Park, CA: 792–799.
Nigam, K., McCallum, A. K., Thrun, S., and Mitchell, T. M. (2000). “Text Classification from Labeled and Unlabeled Documents Using EM.” Machine Learning 39(2/3): 103–134.
Niyogi, D. (1995). A Knowledge-Based Approach to Deriving Logical Structure from Document Images. Doctoral dissertation, State University of New York, Buffalo.
Niyogi, D., and Srihari, S. (1996). Using Domain Knowledge to Derive the Logical Structure of Documents. In Proceedings of Document Recognition III. SPIE, Bellingham, WA: 114–125.
Noik, E. (1996). Dynamic Fisheye Views: Combining Dynamic Queries and Mapping with Database View Definition. Ph.D. thesis, Graduate Department of Computer Science, University of Toronto.
Nong, Y., ed. (2003). The Handbook of Data Mining. Boston, Lawrence Erlbaum Associates.
Oh, H.-J., Myaeng, S. H., and Lee, M.-H. (2000). A Practical Hypertext Categorization Method Using Links and Incrementally Available Class Information. In Proceedings of SIGIR-00, 23rd ACM International Conference on Research and Development in Information Retrieval. N. Belkin, P. Ingwersen, and M.-K. Leong, eds. Athens, ACM Press, New York: 264–271.
Ontrup, J., and Ritter, H. (2001a). Hyperbolic Self-Organizing Maps for Semantic Navigation. In Proceedings of NIPS 2001. T. Dietterich, S. Becker, and Z. Ghahramani, eds. Vancouver, MIT Press, Cambridge, MA: 1417–1424.
Ontrup, J., and Ritter, H. (2001b). Text Categorization and Semantic Browsing with Self-Organizing Maps on Non-Euclidean Spaces. In Proceedings of PKDD-01, 5th European Conference on Principles and Practice of Knowledge Discovery in Databases. Freiburg, Germany, Springer-Verlag, Heidelberg: 338–349.
Paijmans, H. (1999). “Text Categorization as an Information Retrieval Task.” The South African Computer Journal 31: 4–15.
Paliouras, G., Karkaletsis, V., and Spyropoulos, C. D. (1999). Learning Rules for Large Vocabulary Word Sense Disambiguation. In Proceedings of IJCAI-99, 16th International Joint Conference on Artificial Intelligence. T. Dean, ed. Stockholm, Morgan Kaufmann Publishers, San Francisco: 674–679.
Pang, B., Lee, L., and Vaithyanathan, S. (2002). Thumbs Up? Sentiment Classification Using Machine Learning Techniques. In Proceedings of EMNLP-02, 7th Conference on Empirical Methods in Natural Language Processing. Philadelphia, Association for Computational Linguistics, Morristown, NJ: 79–86.
Patel-Schneider, P., and Simeon, J. (2002). Building the Semantic Web on XML. In Proceedings of the 1st International Semantic Web Conference (ISWC). I. Horrocks and J. Hendler, eds. Sardinia, Italy, Springer-Verlag, Heidelberg: 147–161.
Pattison, T., Vernik, R., Goodburn, D., and Phillips, M. (2001). Rapid Assembly and Deployment of Domain Visualisation Solutions. In Proceedings of the Australian Symposium on Information Visualisation, ACM International Conference. Sydney, Australian Computer Society, Darlinghurst, Australia: 19–26.
Pedersen, T., and Bruce, R. (1997). Unsupervised Text Mining. Dallas, TX, Department of Computer Science and Engineering, Southern Methodist University.
Peng, F., and Schuurmans, D. (2003). Combining Naive Bayes and n-gram Language Models for Text Classification. In Proceedings of ECIR-03, 25th European Conference on Information Retrieval. F. Sebastiani, ed. Pisa, Italy, Springer-Verlag, Heidelberg: 335–350.
Peng, F., Schuurmans, D., and Wang, S. (2003). Language and Task Independent Text Categorization with Simple Language Models. In Proceedings of HLT-03, 3rd Human Language Technology Conference. Edmonton, Canada, ACL Press, Morgan Kaufmann Publishers, San Francisco: 110–117.
Petasis, G., Cucchiarelli, A., Velardi, P., Paliouras, G., Karkaletsis, V., and Spyropoulos, C. D. (2000). Automatic Adaptation of Proper Noun Dictionaries through Cooperation of Machine Learning and Probabilistic Methods. In Proceedings of SIGIR-00, 23rd ACM International Conference on Research and Development in Information Retrieval. N. Belkin, P. Ingwersen, and M.-K. Leong, eds. Athens, ACM Press, New York: 128–135.
Peters, C., and Koster, C. H. (2002). Uncertainty-Based Noise Reduction and Term Selection in Text Categorization. In Proceedings of ECIR-02, 24th European Colloquium on Information Retrieval Research. F. Crestani, M. Girolami, and C. J. van Rijsbergen, eds. Glasgow, Springer-Verlag, London: 248–267.
Phillips, W., and Riloff, E. (2002). Exploiting Strong Syntactic Heuristics and Co-Training to Learn Semantic Lexicons. In Proceedings of the 2002 Conference on Empirical Methods in Natural Language Processing (EMNLP 2002). Philadelphia, Association for Computational Linguistics: 125–132.
Piatetsky-Shapiro, G., and Frawley, W. J., eds. (1991). Knowledge Discovery in Databases. Cambridge, MA, MIT Press.
Pierre, J. M. (2002). Mining Knowledge from Text Collections Using Automatically Generated Metadata. In Proceedings of the 4th International Conference on Practical Aspects of Knowledge Management (PAKM-02). D. Karagiannis and U. Reimer, eds. Vienna, Austria, Springer-Verlag, London: 537–548.
Pollard, C., and Sag, I. A. (1994). Head-Driven Phrase Structure Grammar. Chicago, University of Chicago Press and CSLI Publications.
Porter, A. (2002). Text Mining. Technology Policy and Assessment Center, Georgia Institute of Technology.
Pottenger, W., and Yang, T.-h. (2001). Detecting Emerging Concepts in Textual Data Mining. Philadelphia, SIAM.
Punyakanok, V., and Roth, D. (2000). Shallow Parsing by Inferencing with Classifiers. In Proceedings of the 4th Conference on Computational Natural Language Learning and of the 2nd Learning Language in Logic Workshop. Lisbon, Association for Computational Linguistics, Somerset, NJ: 107–110.
Pustejovsky, J., Castano, J., Zhang, J., Kotecki, M., and Cochran, B. (2002). Robust Relational Parsing over Biomedical Literature: Extracting Inhibit Relations. In Proceedings of the 2002 Pacific Symposium on Biocomputing (PSB-2002). Lihue, Hawaii, World Scientific Press, Hackensack, NJ: 362–373.
Rabiner, L. R. (1986). “An Introduction to Hidden Markov Models.” IEEE ASSP Magazine 3(1): 4–16.
Rabiner, L. R. (1990). “A Tutorial on Hidden Markov Models and Selected Applications in Speech Recognition.” In Readings in Speech Recognition. A. Waibel and K.-F. Lee, eds. Los Altos, CA, Morgan Kaufmann Publishers: 267–296.
Ragas, H., and Koster, C. H. (1998). Four Text Classification Algorithms Compared on a Dutch Corpus. In Proceedings of SIGIR-98, 21st ACM International Conference on Research and Development in Information Retrieval. W. B. Croft, A. Moffat, C. J. van Rijsbergen, R. Wilkinson, and J. Zobel, eds. Melbourne, Australia, ACM Press, New York: 369–370.
Rainsford, C., and Roddick, J. (2000). Visualization of Temporal Interval Association Rules. In Proceedings of the 2nd International Conference on Intelligent Data Engineering and Automated Learning. Hong Kong, Springer-Verlag, London: 91–96.
Rajman, M., and Besancon, R. (1997a). A Lattice Based Algorithm for Text Mining. Technical Report TR-LIA-LN1/97, Swiss Federal Institute of Technology.
Rajman, M., and Besancon, R. (1997b). Text Mining: Natural Language Techniques and Text Mining Applications. In Proceedings of the 7th IFIP 2.6 Working Conference on Database Semantics (DS-7). Leysin, Switzerland, Norwell, MA.
Rajman, M., and Besancon, R. (1998). Text Mining – Knowledge Extraction from Unstructured Textual Data. In Proceedings of the 6th Conference of the International Federation of Classification Societies. Rome: 473–480.
Rambow, O., and Joshi, A. K. (1994). “A Formal Look at Dependency Grammars and Phrase-Structure Grammars, with Special Consideration of Word-Order Phenomena.” In Current Issues in Meaning-Text Theory. L. Wanner, ed. London, Pinter.
Rao, R., and Card, S. (1994). The Table Lens: Merging Graphical and Symbolic Representations in an Interactive Focus+Context Visualization for Tabular Information. In Proceedings of the International Conference on Computer–Human Interaction ’94. Boston, MA, ACM Press, New York: 318–322.
Rao, R., Card, S., Jellinek, H., Mackinlay, J., and Robertson, G. (1992). The Information Grid: A Framework for Information Retrieval and Retrieval-Centered Applications. In Proceedings of the 5th Annual Symposium on User Interface Software and Technology (UIST ’92). Monterey, CA, ACM Press, New York: 23–32.
Raskutti, B., Ferra, H., and Kowalczyk, A. (2001). Second Order Features for Maximising Text Classification Performance. In Proceedings of ECML-01, 12th European Conference on Machine Learning. L. De Raedt and P. A. Flach, eds. Freiburg, Germany, Springer-Verlag, London: 419–430.
Rau, L. F., and Jacobs, P. S. (1991). Creating Segmented Databases from Free Text for Text Retrieval. In Proceedings of SIGIR-91, 14th ACM International Conference on Research and Development in Information Retrieval. Chicago, ACM Press, New York: 337–346.
Reape, M. (1989). A Logical Treatment of Semi-free Word Order and Bounded Discontinuous Constituency. In Proceedings of the 4th Meeting of the European ACL. Manchester, UK, Association for Computational Linguistics, Morristown, NJ: 103–110.
Rennie, J., and McCallum, A. K. (1999). Using Reinforcement Learning to Spider the Web Efficiently. In Proceedings of ICML-99, 16th International Conference on Machine Learning. I. Bratko and S. Dzeroski, eds. Bled, Slovenia, Morgan Kaufmann Publishers, San Francisco: 335–343.
Rennie, J., Shih, L., Teevan, J., and Karger, D. (2003). Tackling the Poor Assumptions of Naive Bayes Text Classifiers. In Proceedings of ICML-03, 20th International Conference on Machine Learning. Washington, DC, Morgan Kaufmann Publishers, San Francisco: 616–623.
Reynar, J., and Ratnaparkhi, A. (1997). A Maximum Entropy Approach to Identifying Sentence Boundaries. In Proceedings of the 5th Conference on Applied Natural Language Processing. Washington, DC, Morgan Kaufmann Publishers, San Francisco: 16–19.
Ribeiro-Neto, B., Laender, A. H. F., and Lima, L. R. D. (2001). “An Experimental Study in Automatically Categorizing Medical Documents.” Journal of the American Society for Information Science and Technology 52(5): 391–401.
Rich, E., and LuperFoy, S. (1988). An Architecture for Anaphora Resolution. In Proceedings of the 2nd Conference on Applied Natural Language Processing. Austin, TX, Association for Computational Linguistics, Morristown, NJ: 18–24.
Rijsbergen, C. J. v. (1979). Information Retrieval, 2nd ed. London, Butterworths.
Riloff, E. (1993a). Automatically Constructing a Dictionary for Information Extraction Tasks. In Proceedings of the 11th National Conference on Artificial Intelligence. Washington, DC, AAAI/MIT Press, Menlo Park, CA: 811–816.
Riloff, E. (1993b). Using Cases to Represent Context for Text Classification. In Proceedings of CIKM-93, 2nd International Conference on Information and Knowledge Management. Washington, DC, ACM Press, New York: 105–113.
Riloff, E. (1994). Information Extraction as a Basis for Portable Text Classification Systems. Amherst, MA, Department of Computer Science, University of Massachusetts.
Riloff, E. (1995). Little Words Can Make a Big Difference for Text Classification. In Proceedings of SIGIR-95, 18th ACM International Conference on Research and Development in Information Retrieval. E. A. Fox, P. Ingwersen, and R. Fidel, eds. Seattle, ACM Press, New York: 130–136.
Riloff, E. (1996a). Automatically Generating Extraction Patterns from Untagged Text. In Proceedings of the 13th National Conference on Artificial Intelligence. AAAI/MIT Press, Menlo Park, CA: 1044–1049.
Riloff, E. (1996b). “Using Learned Extraction Patterns for Text Classification.” In Connectionist, Statistical, and Symbolic Approaches to Learning for Natural Language Processing. S. Wermter, E. Riloff, and G. Scheler, eds. Springer-Verlag, London: 275–289.
Riloff, E., and Jones, R. (1999). Learning Dictionaries for Information Extraction by Multi-Level Bootstrapping. In Proceedings of the 16th National Conference on Artificial Intelligence. Orlando, AAAI Press/MIT Press, Menlo Park, CA: 1044–1049.
Riloff, E., and Lehnert, W. (1994). “Information Extraction as a Basis for High-Precision Text Classification.” ACM Transactions on Information Systems 12(3): 296–333.
Riloff, E., and Lehnert, W. (1998). Classifying Texts Using Relevancy Signatures. In Proceedings of AAAI-92, 10th Conference of the American Association for Artificial Intelligence. San Jose, CA, AAAI Press, Menlo Park, CA: 329–334.
Riloff, E., and Lorenzen, J. (1999). “Extraction-Based Text Categorization: Generating Domain-Specific Role Relationships.” In Natural Language Information Retrieval. T. Strzalkowski, ed. Dordrecht, Kluwer Academic Publishers: 167–196.
Riloff, E., and Schmelzenbach, M. (1998). An Empirical Approach to Conceptual Case Frame Acquisition. In Proceedings of the 6th Workshop on Very Large Corpora. E. Charniak, ed. Montreal, Quebec, Association for Computational Linguistics, Morgan Kaufmann Publishers, San Francisco: 49–56.
Riloff, E., and Shoen, J. (1995). Automatically Acquiring Conceptual Patterns Without an Annotated Corpus. In Proceedings of the 3rd Workshop on Very Large Corpora. Boston, MA, Association for Computational Linguistics, Somerset, NJ: 148–161.
Rindflesch, T. C., Hunter, L., and Aronson, A. R. (1999). Mining Molecular Binding Terminology from Biomedical Text. In Proceedings of the ’99 AMIA Symposium. Washington, DC, AMIA, Bethesda, MD: 127–131.
Rindflesch, T. C., Tanabe, L., Weinstein, J. N., and Hunter, L. (2000). EDGAR: Extraction of Drugs, Genes and Relations from the Biomedical Literature. In Proceedings of the 2000 Pacific Symposium on Biocomputing. Waikiki Beach, Hawaii, World Scientific Press, Hackensack, NJ: 517–528.
Roark, B., and Johnson, M. (1999). Efficient Probabilistic Top-Down and Left-Corner Parsing. In Proceedings of the 37th Annual Meeting of the ACL. College Park, MD, Association for Computational Linguistics, Morristown, NJ: 421–428.
Robertson, G., Mackinlay, J., and Card, S. (1991). Cone Trees: Animated 3D Visualizations of Hierarchical Information. In Proceedings of the ACM SIGCHI Conference on Human Factors in Computing Systems. New Orleans, ACM Press, New York: 189–194.
Robertson, S. E., and Harding, P. (1984). “Probabilistic Automatic Indexing by Learning from Human Indexers.” Journal of Documentation 40(4): 264–270.
Rodriguez, M. D. B., Gomez-Hidalgo, J. M., and Diaz-Agudo, B. (1997). Using WordNet to Complement Training Information in Text Categorization. In Proceedings of RANLP-97, 2nd International Conference on Recent Advances in Natural Language Processing. R. Mitkov and N. Nikolov, eds. Tzigov Chark, Bulgaria, John Benjamins, Philadelphia: 353–364.
Rokita, P. (1996). “Generating Depth-of-Field Effects in Virtual Reality Applications.” IEEE Computer Graphics and Applications 16(2): 18–21.
Rose, T., Stevenson, M., and Whitehead, M. (2002). The Reuters Corpus Volume 1 – From Yesterday’s News to Tomorrow’s Language Resources. In Proceedings of LREC-02, 3rd International Conference on Language Resources and Evaluation. Las Palmas, Spain, ELRA, Paris: 827–832.
Rosenfeld, B., Feldman, R., Fresko, M., Schler, J., and Aumann, Y. (2004). TEG: A Hybrid Approach to Information Extraction. In Proceedings of CIKM 2004. Arlington, VA, ACM Press, New York: 589–596.
Roth, D. (1998). Learning to Resolve Natural Language Ambiguities: A Unified Approach. In Proceedings of AAAI-98, 15th Conference of the American Association for Artificial Intelligence. Madison, WI, AAAI Press, Menlo Park, CA: 806–813.
Ruiz, M., and Srinivasan, P. (2002). “Hierarchical Text Classification Using Neural Networks.” Information Retrieval 5(1): 87–118.
Ruiz, M. E., and Srinivasan, P. (1997). Automatic Text Categorization Using Neural Networks. In Proceedings of the 8th ASIS/SIGCR Workshop on Classification Research. E. Efthimiadis, ed. Washington, DC, American Society for Information Science, Washington, DC: 59–72.
Ruiz, M. E., and Srinivasan, P. (1999a). Combining Machine Learning and Hierarchical Indexing Structures for Text Categorization. In Proceedings of the 10th ASIS/SIGCR Workshop on Classification Research. Washington, DC, American Society for Information Science, Washington, DC.
Ruiz, M. E., and Srinivasan, P. (1999b). Hierarchical Neural Networks for Text Categorization. In Proceedings of SIGIR-99, 22nd ACM International Conference on Research and Development in Information Retrieval. M. A. Hearst, F. Gey, and R. Tong, eds. Berkeley, CA, ACM Press, New York: 281–282.
Rzhetsky, A., Iossifov, I., Koike, T., Krauthammer, M., Kra, P., Morris, M., Yu, H., Duboue, P. A., Weng, W., Wilbur, J. W., Hatzivassiloglou, V., and Friedman, C. (2004). “GeneWays: A System for Extracting, Analyzing, Visualizing, and Integrating Molecular Pathway Data.” Journal of Biomedical Informatics 37: 43–53.
Rzhetsky, A., Koike, T., Kalachikov, S., Gomez, S. M., Krauthammer, M., Kaplan, S. H., Kra, P., Russo, J. J., and Friedman, C. (2000). “A Knowledge Model for Analysis and Simulation of Regulatory Networks.” Bioinformatics 16: 1120–1128.
Sabidussi, G. (1966). “The Centrality Index of a Graph.” Psychometrika 31: 581–603.
Sable, C., and Church, K. (2001). Using Bins to Empirically Estimate Term Weights for Text Categorization. In Proceedings of EMNLP-01, 6th Conference on Empirical Methods in Natural Language Processing. Pittsburgh, Association for Computational Linguistics, Morristown, NJ: 58–66.
Sable, C. L., and Hatzivassiloglou, V. (1999). Text-Based Approaches for the Categorization of Images. In Proceedings of ECDL-99, 3rd European Conference on Research and Advanced Technology for Digital Libraries. S. Abiteboul and A.-M. Vercoustre, eds. Paris, Springer-Verlag, Heidelberg: 19–38.
Sable, C. L., and Hatzivassiloglou, V. (2000). “Text-Based Approaches for Non-topical Image Categorization.” International Journal of Digital Libraries 3(3): 261–275.
Sahami, M., ed. (1998). Learning for Text Categorization. Papers from the 1998 AAAI Workshop. Madison, WI, AAAI Press, Menlo Park, CA.
Sahami, M., Hearst, M. A., and Saund, E. (1996). Applying the Multiple Cause Mixture Model to Text Categorization. In Proceedings of ICML-96, 13th International Conference on Machine Learning. L. Saitta, ed. Bari, Italy, Morgan Kaufmann Publishers, San Francisco: 435–443.
Sahami, M., Yusufali, S., and Baldonado, M. Q. (1998). SONIA: A Service for Organizing Networked Information Autonomously. In Proceedings of DL-98, 3rd ACM Conference on Digital Libraries. I. Witten, R. Akscyn, and F. M. Shipman, eds. Pittsburgh, ACM Press, New York: 200–209.
Sakakibara, Y., Misue, K., and Koshiba, T. (1996). “A Machine Learning Approach to Knowledge Acquisitions from Text Databases.” International Journal of Human Computer Interaction 8(3): 309–324.
Sakkis, G., Androutsopoulos, I., Paliouras, G., Karkaletsis, V., Spyropoulos, C. D., and Stamatopoulos, P. (2001). Stacking Classifiers for Anti-Spam Filtering of E-Mail. In Proceedings of EMNLP-01, 6th Conference on Empirical Methods in Natural Language Processing. Pittsburgh, Association for Computational Linguistics, Morristown, NJ: 44–50.
Sakkis, G., Androutsopoulos, I., Paliouras, G., Karkaletsis, V., Spyropoulos, C. D., and Stamatopoulos, P. (2003). “A Memory-Based Approach to Anti-Spam Filtering for Mailing Lists.” Information Retrieval 6(1): 49–73.
Salamonsen, W., Mok, K., Kolatkar, P., and Subbiah, S. (1999). BioJAKE: A Tool for the Creation, Visualization and Manipulation of Metabolic Pathways. In Proceedings of the Pacific Symposium on Biocomputing. Hawaii, World Scientific Press, Hackensack, NJ: 392–400.
Salton, G. (1989). Automatic Text Processing. Reading, MA, Addison-Wesley.
Sanchez, S. N., Triantaphyllou, E., and Kraft, D. (2002). “A Feature Mining Based Approach for the Classification of Text Documents into Disjoint Classes.” Information Processing and Management 38(4): 583–604.
Sarkar, M., and Brown, M. (1992). Graphical Fisheye Views of Graphs. In Proceedings of the ACM SIGCHI ’92 Conference on Human Factors in Computing Systems. Monterey, CA, ACM Press, New York: 83–91.
Sasaki, M., and Kita, K. (1998). Automatic Text Categorization Based on Hierarchical Rules. In Proceedings of the 5th International Conference on Soft Computing and Information. Iizuka, Japan, World Scientific, Singapore: 935–938.
Sasaki, M., and Kita, K. (1998). Rule-Based Text Categorization Using Hierarchical Categories. In Proceedings of SMC-98, IEEE International Conference on Systems, Man, and Cybernetics. La Jolla, CA, IEEE Computer Society Press, Los Alamitos, CA: 2827–2830.
Schapire, R. E., and Singer, Y. (2000). “BoosTexter: A Boosting-Based System for Text Categorization.” Machine Learning 39(2/3): 135–168.
Schapire, R. E., Singer, Y., and Singhal, A. (1998). Boosting and Rocchio Applied to Text Filtering. In Proceedings of SIGIR-98, 21st ACM International Conference on Research and Development in Information Retrieval. W. B. Croft, A. Moffat, C. J. van Rijsbergen, R. Wilkinson, and J. Zobel, eds. Melbourne, Australia, ACM Press, New York: 215–223.
Scheffer, T., and Joachims, T. (1999). Expected Error Analysis for Model Selection. In Proceedings of ICML-99, 16th International Conference on Machine Learning. I. Bratko and S. Dzeroski, eds. Bled, Slovenia, Morgan Kaufmann Publishers, San Francisco: 361–370.
Schneider, K.-M. (2003). A Comparison of Event Models for Naive Bayes Anti-Spam E-Mail Filtering. In Proceedings of EACL-03, 11th Conference of the European Chapter of the Association for Computational Linguistics. Budapest, Hungary, Association for Computational Linguistics, Morristown, NJ: 307–314.
Schutze, H. (1993). Part-of-Speech Induction from Scratch. In Proceedings of the 31st Annual Meeting of the Association for Computational Linguistics. Columbus, OH, Association for Computational Linguistics, Morristown, NJ: 251–258.
Schutze, H. (1998). “Automatic Word Sense Discrimination.” Computational Linguistics 24(1): 97–124.
Schutze, H., Hull, D. A., and Pedersen, J. O. (1995). A Comparison of Classifiers and Document Representations for the Routing Problem. In Proceedings of SIGIR-95, 18th ACM International Conference on Research and Development in Information Retrieval. E. A. Fox, P. Ingwersen, and R. Fidel, eds. Seattle, ACM Press, New York: 229–237.
Scott, J. (2000). Social Network Analysis: A Handbook. London, Sage Publications.
Scott, S. (1998). Feature Engineering for a Symbolic Approach to Text Classification. Master’s thesis, Computer Science Department, University of Ottawa.
Scott, S., and Matwin, S. (1999). Feature Engineering for Text Classification. In Proceedings of ICML-99, 16th International Conference on Machine Learning. I. Bratko and S. Dzeroski, eds. Bled, Slovenia, Morgan Kaufmann Publishers, San Francisco: 379–388.
Sebastiani, F. (1999). A Tutorial on Automated Text Categorisation. In Proceedings of ASAI-99, 1st Argentinian Symposium on Artificial Intelligence. A. Amandi and R. Zunino, eds. Buenos Aires: 7–35.
Sebastiani, F. (2002). “Machine Learning in Automated Text Categorization.” ACM Computing Surveys 34(1): 1–47.
Sebastiani, F., Sperduti, A., and Valdambrini, N. (2000). An Improved Boosting Algorithm and Its Application to Automated Text Categorization. In Proceedings of CIKM-00, 9th ACM International Conference on Information and Knowledge Management. A. Agah, J. Callan, and E. Rundensteiner, eds. McLean, VA, ACM Press, New York: 78–85.
Seidman, S. B. (1983). “Network Structure and Minimum Degree.” Social Networks 5: 269–287.
Seymore, K., McCallum, A., and Rosenfeld, R. (1999). Learning Hidden Markov Model Structure for Information Extraction. In AAAI-99 Workshop on Machine Learning for Information Extraction. Orlando, FL, AAAI Press, Menlo Park, CA: 37–42.
Sha, F., and Pereira, F. (2003). Shallow Parsing with Conditional Random Fields. Technical Report CIS TR MS-CIS-02-35, University of Pennsylvania.
Shin, C., Doermann, D., and Rosenfeld, A. (2001). “Classification of Document Pages Using Structure-Based Features.” International Journal on Document Analysis and Recognition 3(4): 232–247.
Shneiderman, B. (1996). The Eyes Have It: A Task by Data Type Taxonomy for Information Visualizations. In Proceedings of the 1996 IEEE Conference on Visual Languages. Boulder, CO, IEEE Computer Society Press, Washington, DC: 336–343.
Shneiderman, B. (1997). Designing the User Interface: Strategies for Effective Human–Computer Interaction. Reading, MA, Addison-Wesley.
Shneiderman, B., Byrd, D., and Croft, W. B. (1998). “Sorting Out Searching: A User Interface Framework for Text Searches.” Communications of the ACM 41(4): 95–98.
Sigletos, G., Paliouras, G., and Karkaletsis, V. (2002). Role Identification from Free Text Using Hidden Markov Models. In Proceedings of the 2nd Hellenic Conference on AI: Methods and Applications of Artificial Intelligence. I. P. Vlahavas and C. D. Spyropoulos, eds. Thessaloniki, Greece, Springer-Verlag, London: 167–178.
Silberschatz, A., and Tuzhilin, A. (1996). “What Makes Patterns Interesting in Knowledge Discovery Systems.” IEEE Transactions on Knowledge and Data Engineering 8(6): 970–974.
Silverstein, C., Brin, S., and Motwani, R. (1999). “Beyond Market Baskets: Generalizing Association Rules to Dependence Rules.” Data Mining and Knowledge Discovery 2(1): 39–68.
Siolas, G., and d’Alche-Buc, F. (2000). Support Vector Machines Based on a Semantic Kernel for Text Categorization. In Proceedings of IJCNN-00, 11th International Joint Conference on Neural Networks. Como, Italy, IEEE Computer Society Press, Los Alamitos, CA: 205–209.
Skarmeta, A. G., Bensaid, A., and Tazi, N. (2000). “Data Mining for Text Categorization with Semi-supervised Agglomerative Hierarchical Clustering.” International Journal of Intelligent Systems 15(7): 633–646.
Slattery, S., and Craven, M. (1998). Combining Statistical and Relational Methods for Learning in Hypertext Domains. In Proceedings of ILP-98, 8th International Conference on Inductive Logic Programming. D. Page, ed. Madison, WI, Springer-Verlag, Heidelberg: 38–52.
Slattery, S., and Craven, M. (2000). Discovering Test Set Regularities in Relational Domains. In Proceedings of ICML-00, 17th International Conference on Machine Learning. P. Langley, ed. Stanford, CA, Morgan Kaufmann Publishers, San Francisco: 895–902.
Slonim, N., and Tishby, N. (2001). The Power of Word Clusters for Text Classification. In Proceedings of ECIR-01, 23rd European Colloquium on Information Retrieval Research. Darmstadt, Germany, Academic Press, British Computer Society, London.
Smith, D. (2002). Detecting and Browsing Events in Unstructured Text. In Proceedings of the 25th Annual ACM SIGIR Conference. Tampere, Finland, ACM Press, New York: 73–80.
Soderland, S. (1999). “Learning Information Extraction Rules for Semi-Structured and Free Text.” Machine Learning 34(1–3): 233–272.
Soderland, S., Etzioni, O., Shaked, T., and Weld, D. S. (2004). The Use of Web-based Statistics to Validate Information Extraction. In Proceedings of the AAAI-2004 Workshop on Adaptive Text Extraction and Mining (ATEM-2004). San Jose, CA, AAAI Press, Menlo Park, CA: 21–27.
Soderland, S., Fisher, D., Aseltine, J., and Lehnert, W. (1995). CRYSTAL: Inducing a Conceptual Dictionary. In Proceedings of the 14th International Joint Conference on Artificial Intelligence. C. Mellish, ed. Montreal, Canada, Morgan Kaufmann Publishers, San Francisco: 1314–1319.
Soh, J. (1998). A Theory of Document Object Locator Combination. Doctoral dissertation, State University of New York at Buffalo.
Sondag, P.-P. (2001). The Semantic Web: Paving the Way to the Knowledge Society. In Proceedings of the 27th International Conference on Very Large Databases (VLDB). Rome, Morgan Kaufmann Publishers, San Francisco: 16.
Soon, W. M., Ng, H. T., and Lim, D. C. Y. (2001). “A Machine Learning Approach to Coreference Resolution of Noun Phrases.” Computational Linguistics 27(4): 521–544.
Soucy, P., and Mineau, G. W. (2001a). A Simple Feature Selection Method for Text Classification. In Proceedings of IJCAI-01, 17th International Joint Conference on Artificial Intelligence. B. Nebel, ed. Seattle, AAAI Press, Menlo Park, CA: 897–902.
Soucy, P., and Mineau, G. W. (2001b). A Simple KNN Algorithm for Text Categorization. In Proceedings of ICDM-01, IEEE International Conference on Data Mining. N. Cercone, T. Y. Lin, and X. Wu, eds. San Jose, CA, IEEE Computer Society Press, Los Alamitos, CA: 647–648.
Soucy, P., and Mineau, G. W. (2003). Feature Selection Strategies for Text Categorization. In Proceedings of CSCSI-03, 16th Conference of the Canadian Society for Computational Studies of Intelligence. Y. Xiang and B. Chaib-Draa, eds. Halifax: 505–509.
Spence, B. (2001). Information Visualization. Harlow, UK, Addison-Wesley.
Spenke, M., and Beilken, C. (1999). Visual, Interactive Data Mining with InfoZoom – The Financial Data Set. In Proceedings of the 3rd European Conference on Principles and Practice of Knowledge Discovery in Databases. Prague, Springer-Verlag, Berlin.
Spitz, L., and Maghbouleh, A. (2000). Text Categorization Using Character Shape Codes. In Proceedings of the 7th SPIE Conference on Document Recognition and Retrieval. San Jose, CA, SPIE, The International Society for Optical Engineering, Bellingham, WA: 174–181.
Spoerri, A. (1999). “InfoCrystal: A Visual Tool for Information Retrieval.” In Readings in Information Visualization: Using Vision to Think. S. Card, J. Mackinlay, and B. Shneiderman, eds. San Francisco, Morgan Kaufmann Publishers: 140–147.
Srikant, R., and Agrawal, R. (1995). Mining Generalized Association Rules. In Proceedings of the 21st International Conference on Very Large Databases. U. Dayal, P. Gray, and S. Nishio, eds. Zurich, Switzerland, Morgan Kaufmann Publishers, San Francisco, CA: 407–419.
Gardarin, eds. Avignon, France, Springer-Verlag, Berlin: 3–17. Stamatatos, E., Fakotakis, N., and Kokkinakis, G. (2000). “Automatic Text Categorization in Terms of Genre and Author.” Computational Linguistics 26(4): 471–495. Stapley, B. J., and Benoit, G. (2000). Biobibliometrics: Information Retrieval and Visualization from Co-occurrences of Gene Names in Medline Abstracts. In Proceedings of the Pacific Symposium on Biocomputing. Honolulu, Hawaii, World Scientific Press, Hackensack, NJ: 526–537. Steinbach, M., Karypis, G., and Kumar, V. (2000). A Comparison of Document Clustering Tech-niques. In Proceedings of the 6th ACM SIGKDD International Conference on Knowledge Discovery and Data Mining. Boston, ACM Press, New York. Sun, A., and Lim, E.-P. (2001). Hierarchical Text Classification and Evaluation. In Proceedings of ICDM-01, IEEE International Conference on Data Mining. N. Cercone, T. Lin, and X. Wu, eds. San Jose, CA, IEEE Computer Society Press, Los Alamitos, CA: 521–528. Sun, A., Lim, E.-P., and Ng, W.-K. (2003a). “Hierarchical Text Classification Methods and Their Specification.” In Cooperative Internet Computing. A. T. Chan, S. Chan, H. Y. Leong, and V. T. Y. Ng., eds. Dordrecht, Kluwer Academic Publishers: 236–256. Sun, A., Lim, E.-P., and Ng, W.-K. (2003b). “Performance Measurement Framework for Hier-archical Text Classification.” Journal of the American Society for Information Science and Technology 54(11): 1014–1028. Sun, A., Naing, M., Lim, E., and Lam, W. (2003). Using Support Vector Machine for Terrorism Information Extraction. In Proceedings of the Intelligence and Security Informatics: 1st NSF/NIJ Symposium on Intelligence and Security Informatics. H. Chen, R. Miranda, D. Zeng, C. Demchek, J. Schroeder, and T. Madhusudan, eds. Tucson, AZ, Springer-Verlag, Berlin: 1–12. Taghva, K., Nartker, T. A., Borsack, J., Lumos, S., Condit, A., and Young, R. (2000). Evaluating Text Categorization in the Presence of OCR Errors. In Proceedings of the 8th SPIE Con-ference on Document Recognition and Retrieval. San Jose, CA, SPIE, The International Society for Optical Engineering, Washington, DC: 68–74. Taira, H., and Haruno, M. (1999). Feature Selection in SVM Text Categorization. In Proceed-ings of AAAI-99, 16th Conference of the American Association for Artificial Intelligence. Orlando, FL, AAAI Press, Menlo Park, CA: 480–486. Taira, H., and Haruno, M. (2001). Text Categorization Using Transductive Boosting. In Pro-ceedings of ECML-01, 12th European Conference on Machine Learning. L. D. Raedt and P. A. Flach, eds. Freiburg, Germany, Springer-Verlag, Heidelberg: 454–465. Takamura, H., and Matsumoto, Y. (2001). Feature Space Restructuring for SVMs with Appli-cation to Text Categorization. In Proceedings of EMNLP-01, 6th Conference on Empirical Methods in Natural Language Processing. Pittsburgh, Association for Computational Lin-guistics, Morristown, NJ: 51–57. Tan,A.-H.(2001).PredictiveSelf-OrganizingNetworksforTextCategorization.InProceedings of PAKDD-01, 5th Pacific-Asia Conference on Knowledge Discovery and Data Mining. Hong Kong, Springer-Verlag, Heidelberg: 66–77. Tan, A. (1999). Text Mining: The State of the Art and the Challenges. In Proceedings of the PAKDD’99 Workshop on Knowledge Discovery from Advanced Databases (KDAD’99). Beijing: 71–76. Bibliography 381 Tan, C.-M., Wang, Y.-F., and Lee, C.-D. (2002). “The Use of Bigrams to Enhance Text Cate-gorization.” Information Processing and Management 38(4): 529–546. Taskar, B., Abbeel, P., and Koller, D. (2002). 
Discriminative Probabilistic Models of Relational Data. In Proceedings of UAI-02, 18th Conference on Uncertainty in Artificial Intelligence. Edmonton, Canada, Morgan Kaufmann Publishers, San Francisco: 485–492. Taskar, B., Segal, E., and Koller, D. (2001). Probabilistic Classification and Clustering in Rela-tional Data. In Proceedings of IJCAI-01, 17th International Joint Conference on Artifi-cial Intelligence. B. Nebel, ed. Seattle, Morgan Kaufmann Publishers, San Francisco: 870– 878. Tauritz, D. R., Kok, J. N., and Sprinkhuizen-Kuyper, I. G. (2000). “Adaptive Information Filtering Using Evolutionary Computation.” Information Sciences 122(2/4): 121–140. Tauritz, D. R., and Sprinkhuizen-Kuyper, I. G. (1999). Adaptive Information Filtering Algo-rithms. In Proceedings of IDA-99, 3rd Symposium on Intelligent Data Analysis. D. J. Wand, J. N. Kok, and M. R. Berthold, eds. Amsterdam, Springer-Verlag, Heidelberg: 513–524. Teahan, W. J. (2000). Text Classification and Segmentation Using Minimum Cross-entropy. In Proceedings of RIAO-00, 6th International Conference “Recherche d’Information Assist´ ee par Ordinateur.” Paris: 943–961. Teytaud, O., and Jalam, R. (2001). Kernel Based Text Categorization. In Proceedings of IJCNN-01, 12th International Joint Conference on Neural Networks. Washington, DC, IEEE Com-puter Society Press, Los Alamitos, CA: 1892–1897. Theeramunkong, T., and Lertnattee, V. (2002). Multi-Dimensional Text Classification. In Pro-ceedings of COLING-02, 19th International Conference on Computational Linguistics. Taipei, Taiwan Association for Computational Linguistics, Morristown, NJ. Thelen, M., and Riloff, E. (2002). A Bootstrapping Method for Learning Semantic Lexicons Using Extraction Pattern Contexts. In Proceedings of the Conference on Empirical Methods in Natural Language Processing (EMNLP 2002). Philadelphia, Association for Computa-tional Linguistics, Morristown, NJ: 214–221. Thomas, J., Cook, K., Crow, V., Hetzler, B., May, R., McQuerry, D., McVeety, R., Miller, N., Nakamura, G., Nowell, L., Whitney, P., and Wong, P. C. (1999). Human Computer Interac-tion with Global Information Spaces: Beyond Data Mining. In Proceedings of the British Computer Society Conference. Bradford, UK, Springer-Verlag, London. Thompson, P. (2001). Automatic Categorization of Case Law. In Proceedings of ICAIL-01, 8th International Conference on Artificial Intelligence and Law. St. Louis, MO, ACM Press, New York: 70–77. Toivonen, H., Klemettinen, M., Ronkainen, P., Hatonen, K., and Mannila, H. (1995). Prun-ing and Grouping Discovered Association Rules. In Workshop Notes: Statistics, Machine Learning and Knowledge Discovery in Databases, ECML-95. N. Lavrac and S. Wrobel, eds. Heraclion, Greece, Springer-Verlag, Berlin: 47–52. Tombros, A., Villa, R., and Rijsbergen, C. J. (2002). “The Effectiveness of Query-Specific Hierarchic Clustering in Information Retrieval.” Information Processing & Management 38(4): 559–582. Tong, R., Winkler, A., and Gage, P . (1992). Classification Trees for Document Routing: A Report on the TREC Experiment. In Proceedings of TREC-1, 1st Text Retrieval Conference. D. K. Harman, ed. Gaithersburg, MD, National Institute of Standards and Technology, Gaithersburg, MD: 209–228. Tong, S., and Koller, D. (2000). Support Vector Machine Active Learning with Applications to Text Classification. In Proceedings of ICML-00, 17th International Conference on Machine Learning. P. Langley, ed. Stanford, CA, Morgan Kaufmann Publishers, San Francisco, CA: 999–1006. Tong, S., and Koller, D. 
(2001). “Support Vector Machine Active Learning with Applications to Text Classification.” Journal of Machine Learning Research 2: 45–66. Toutanova, K., Chen, F., Popat, K., and Hofmann, T. (2001). Text Classification in a Hier-archical Mixture Model for Small Training Sets. In Proceedings of CIKM-01, 10th ACM 382 Bibliography International Conference on Information and Knowledge Management. H. Paques, L. Liu, and D. Grossman, eds. Atlanta, ACM Press, New York: 105–113. Trastour, D., Bartolini, C., and Preist, C. (2003). “Semantic Web Support for the Business-to-Business E-Commerce Pre-Contractual Lifecycle.” Computer Networks 42(5): 661–673. Tufte, E. (1983). The Visual Display of Quantitative Informaiton. Chelshire, CT, Graphics Press. Tufte, E. (1990). Envisioning Information. Chelshire, CT, Graphics Press. Tufte, E. (1997). Visual Explanations. Cheshire, CT, Graphics Press. Turney, P. (1997). Extraction of Keyphrases from Text: Evaluation of Four Algorithms. Tech-nical Report ERB 1051, National Research Council of Canada, Institute for Information Technology: 1–27. Turney, P. D. (2000). “Learning Algorithms for Keyphrase Extraction.” Information Retrieval 2(4): 303–336. Tzeras, K., and Hartmann, S. (1993). Automatic Indexing Based on Bayesian Inference Net-works. In Proceedings of SIGIR-93, 16th ACM International Conference on Research and Development in Information Retrieval. R. Korfhage, E. M. Rasmussen, and P . Willett, eds. Pittsburgh, ACM Press, New York: 22–34. Tzoukermann, E., Klavans, J., and Jacquemin, C. (1997). Effective Use of Natural Language Processing Techniques for Automatic Conflation of Multi-Word Terms: The Role of Deriva-tional Morphology, Part of Speech Tagging, and Shallow Parsing. In Proceedings of the 20th Annual International ACM SIGIR Conference on Research and Development in Informa-tion Retrieval. Philadelphia, ACM Press, New York: 148–155. Urena-Lopez, L. A., Buenaga, M., and Gomez, J. M. (2001). “Integrating linguistic resources in TC through WSD.” Computers and the Humanities 35(2): 215–230. Uren, V. S., and Addis, T. R. (2002). “How Weak Categorizers Based upon Different Principles Strengthen Performance.” The Computer Journal 45(5): 511–524. Vapnik, V. (1995). The Nature of Statistical Learning Theory. Berlin, Springer-Verlag. Varadarajan, S., Kasravi, K., and Feldman, R. (2002). Text-Mining: Application Development Challenges. In Proceedings of the 22nd SGAI International Conference on Knowledge Based Systems and Applied Artificial Intelligence. Cambridge, UK, Springer-Verlag, Berlin. Vel, O. Y. D., Anderson, A., Corney, M., and Mohay, G. M. (2001). “Mining Email Content for Author Identification Forensics.” SIGMOD Record 30(4): 55–64. Vert, J.-P. (2001). Text Categorization Using Adaptive Context Trees. In Proceedings of CICLING-01, 2nd International Conference on Computational Linguistics and Intelligent Text Processing. A. Gelbukh, ed. Mexico City, Springer-Verlag, Heidelberg: 423–436. Viechnicki, P. (1998). A Performance Evaluation of Automatic Survey Classifiers. In Proceed-ings of ICGI-98, 4th International Colloquium on Grammatical Inference. V. Honavar and G. Slutzki, eds. Ames, IA, Springer-Verlag, Heidelberg: 244–256. Vinokourov, A., and Girolami, M. (2001). Document Classification Employing the Fisher Ker-nel Derived from Probabilistic Hierarchic Corpus Representations. In Proceedings of ECIR-01, 23rd European Colloquium on Information Retrieval Research. Darmstadt, Germany, Springer-Verlag, Berlin: 24–40. Vinokourov, A., and Girolami, M. 
(2002). “A Probabilistic Framework for the Hierarchic Organisation and Classification of Document Collections.” Journal of Intelligent Informa-tion Systems 18(2/3): 153–172. Wang, H., and Son, N. H. (1999). Text Classification Using Lattice Machine. In Proceed-ings of ISMIS-99, 11th International Symposium on Methodologies for Intelligent Systems. A. Skowron and Z. W. Ras, eds. Warsaw, Springer-Verlag, Heidelberg: 235–243. Wang, J. T. L., Zhang, K., Chang, G., and Shasha, D. (2002). “Finding Approximate Patterns in Undirected Acyclic Graphs.” Pattern Recognition 35(2): 473–483. Wang, K., Zhou, S., and He, Y. (2001). Hierarchical Classification of Real Life Documents. In Proceedings of the 1st SIAM International Conference on Data Mining. Chicago, SIAM Press, Philadelphia. Bibliography 383 Wang, K., Zhou, S., and Liew, S. C. (1999). Building Hierarchical Classifiers Using Class Prox-imity. In Proceedings of VLDB-99, 25th International Conference on Very Large Data Bases. M. P. Atkinson, M. E. Orlowska, P. Valduriez, S. B. Zdonik, and M. L. Brodie, eds. Edinburgh, Morgan Kaufmann Publishers, San Francisco: 363–374. Wang, W., Meng, W., and Yu, C. (2000). Concept Hierarchy Based Text Database Categorization in a Metasearch Engine Environment. In Proceedings of WISE-00, 1st International Con-ference on Web Information Systems Engineering. Hong Kong, IEEE Computer Society Press, Los Alamitos, CA: 283–290. Wang, Y., and Hu, J. (2002). A Machine Learning Based Approach for Table Detection on the Web. In Proceedings of the 11th International World Web Conference. Honolulu, HI, ACM Press, New York: 242–250. Ware, C. (2000). Information Visualization: Perception for Design, San Francisco, Morgan Kaufmann Publishers. Wasserman, S., and Faust, K. (1994). Social Network Analysis: Methods and Applications. Cambridge, UK, Cambridge University Press. Wei, C.-P., and Dong, Y.-X. (2001). A Mining-based Category Evolution Approach to Managing Online Document Categories. In Proceedings of HICSS-01, 34th Annual Hawaii Interna-tional Conference on System Sciences. R. H. Sprague, ed. Maui, HI, IEEE Computer Society Press, Los Alamitos, CA: 7061–7062. Weigend, A. S., Wiener, E. D., and Pedersen, J. O. (1999). “Exploiting Hierarchy in Text Cate-gorization.” Information Retrieval 1(3): 193–216. Weischedel, R., Meteer, M., Schwartz, R., Ramshaw, L., and Palmucci, J. (1993). “Coping with Ambiguity and Unknown Words through Probabilistic Methods.” Computational Linguis-tics 19(2): 361–382. Weiss, S. M., Apte, C., Damerau, F. J., Johnson, D. E., Oles, F. J., Goetz, T., and Hampp, T. (1999). “Maximizing Text-Mining Performance.” IEEE Intelligent Systems 14(4): 63–69. Wermter, S. (2000). “Neural Network Agents for Learning Semantic Text Classification.” Information Retrieval 3(2): 87–103. Wermter, S., Arevian, G., and Panchev, C. (1999). Recurrent Neural Network Learning for Text Routing. In Proceedings of ICANN-99, 9th International Conference on Artificial Neural Networks. Edinburgh, Institution of Electrical Engineers, London, UK: 898–903. Wermter, S., and Hung, C. (2002). Self-Organizing Classification on the Reuters News Cor-pus. In Proceedings of COLING-02, the 19th International Conference on Computational Linguistics. Taipei, Morgan Kaufmann Publishers, San Francisco. Wermter, S., Panchev, C., and Arevian, G. (1999). Hybrid Neural Plausibility Networks for News Agents. In Proceedings of AAAI-99, 16th Conference of the American Association for Artificial Intelligence. Orlando, FL, AAAI Press, Menlo Park, CA: 93–98. 
Westphal, C., and Bergeron, R. D. (1998). Data Mining Solutions: Methods and Tools for Solving Real-Word Problems. New York, John Wiley and Sons. White, D. R., and Reitz, K. P. (1983). “Graph and Semigroup Homomorphisms on Networks of Relations.” Social Networks 5: 193–234. Wibowo, W., and Williams, H. E. (2002). Simple and Accurate Feature Selection for Hierarchical Categorisation. In Proceedings of the 2002 ACM Symposium on Document Engineering. McLean, VA, ACM Press, New York: 111–118. Wiener, E. D. (1995). A Neural Network Approach to Topic Spotting in Text. Boulder, CO, Department of Computer Science, University of Colorado at Boulder. Wiener, E. D., Pedersen, J. O., and Weigend, A. S. (1995). A Neural Network Approach to Topic Spotting. In Proceedings of SDAIR-95, 4th Annual Symposium on Document Analysis and Information Retrieval. Las Vegas, ISRI, University of Nevada, Las Vegas: 317–332. Wilks, Y. (1997). “Information Extraction as a Core Language Technology.” In M. T. Pazienza, ed. Information Extraction: A Multidisciplinary Approach to an Emerging Information Technology. Lecture Notes in Computer Science 1229: 1–9. 384 Bibliography Williamson, C., and Schneiderman, B. (1992). The Dynamic HomeFinder: Evaluating Dynamic Queries in a Real-Estate Information Exploration System. In Proceedings of the 15th Annual, ACM-SIGIR. N. Belkin, P. Ingwersen, A. Pejtersen, eds. Copenhagen, ACM Press, New York: 338–346. Wills, G. (1999). “NicheWorks’ Interactive Visualization of Very Large Graphs.” Journal of Computational and Graphical Statistics 8(2): 190–212. Wise, J., Thomas, J., Pennock, K., Lantrip, D., Pottier, M., Schur, A., and Crow, V. (1995). Visualizing the Non-Visual: Spatial Analysis and Interaction with Information from Text Documents. In Proceedings of IEEE Information Visualization ’95. Atlanta, GA, IEEE Computer Society Press, Los Alamitos, CA: 51–58. Witten, I. H., Bray, Z., Mahoui, M., and Teahan, W. J. (1999). Text Mining: A New Frontier for Lossless Compression. In Proceedings of IEEE Data Compression Conference. J. Ai. Storer and M. Cohn, eds. Snowbird, UT, IEEE Computer Society Press, Los Alamitos, CA: 198–207. Wong, J. W., Kan, W.-K., and Young, G. H. (1996). “Action: Automatic Classification for Full-Text Documents.” SIGIR Forum 30(1): 26–41. Wong, P. C. (1999). “Visual Data Mining – Guest Editor’s Introduction.” IEEE Computer Graphics and Applications 19(5): 2–12. Wong, P. C., Cowley, W., Foote, H., Jurrus, E., and Thomas, J. (2000). Visualizing Sequential Patterns for Text Mining. In Proceedings of the IEEE Information Visualization Conference (INFOVIS 2000). Salt Lake City, UT, ACM Press, New York: 105–115. Wong, P. C., Whitney, P., and Thomas, J. (1999). Visualizing Association Rules for Text Min-ing. In Proceedings of IEEE Information Visualization (InfoVis ’99). San Francisco, IEEE Computer Society Press, Washington, DC: 120–124. Xu, Z., Yu, K., Tresp, V., Xu, X., and Wang, J. (2003). Representative Sampling for Text Clas-sification Using Support Vector Machines. In Proceedings of ECIR-03, 25th European Con-ference on Information Retrieval. F. Sebastiani, ed. Pisa, Italy, Springer-Verlag, Berlin: 393–407. Xue, D., and Sun, M. (2003). Chinese Text Categorization Based on the Binary Weighting Model with Non-binary Smoothing. In Proceedings of ECIR-03, 25th European Con-ference on Information Retrieval. F. Sebastiani, ed. Pisa, Italy, Springer-Verlag, Berlin: 408–419. Yamazaki, T., and Dagan, I. (1997). Mistake-Driven Learning with Thesaurus for Text Cat-egorization. 
In Proceedings of NLPRS-97, the Natural Language Processing Pacific Rim Symposium. Phuket, Thailand: 369–374. Yang, C. C., Chen, H., and Hong, K. (2003). “Visualization of Large Category Map for Internet Browsing.” Decision Support Systems 35: 89–102. Yang, H.-C., and Lee, C.-H. (2000a). Automatic Category Generation for Text Documents by Self-Organizing Maps. In Proceedings of IJCNN-00, 11th International Joint Conference on Neural Networks, Volume 3. Como, Italy, IEEE Computer Society Press, Los Alamitos, CA, 3581–3586. Yang, H.-C., and Lee, C.-H. (2000b). Automatic Category Structure Generation and Catego-rization of Chinese Text Documents. In Proceedings of PKDD-00, 4th European Conference on Principles of Data Mining and Knowledge Discovery. D. Zighed, A. Komorowski, and D. Zytkow, eds. Lyon, France, Springer-Verlag, Heidelberg, Germany: 673–678. Yang, T. (2000). Detecting Emerging Contextual Concepts in Textual Collections. M.Sc. thesis, Department of Computer Science, University of Illinois at Urbana-Champaign. Yang, Y. (1994). Expert Network: Effective and Efficient Learning from Human Decisions in Text Categorisation and Retrieval. In Proceedings of SIGIR-94, 17th ACM International Conference on Research and Development in Information Retrieval. W. B. Croft and C. J. v. Rijsbergen, eds. Dublin, Springer-Verlag, Heidelberg: 13–22. Yang, Y. (1995). Noise Reduction in a Statistical Approach to Text Categorization. In Proceed-ings of SIGIR-95, 18th ACM International Conference on Research and Development in Bibliography 385 Information Retrieval. E. A. Fox, P . Ingwersen, and R. Fidel, eds. Seattle, ACM Press, New York: 256–263. Yang, Y. (1996). An Evaluation of Statistical Approaches to MEDLINE Indexing. In Proceed-ings of AMIA-96, Fall Symposium of the American Medical Informatics Association. J. J. Cimino, ed. Washington, DC, Hanley and Belfus, Philadelphia: 358–362. Yang, Y. (1999). “An Evaluation of Statistical Approaches to Text Categorization.” Informa-tion Retrieval 1(1/2): 69–90. Yang, Y. (2001). A Study on Thresholding Strategies for Text Categorization. In Proceedings of SIGIR-01, 24th ACM International Conference on Research and Development in Infor-mation Retrieval. W. B. Croft, D. J. Harper, D. H. Kroft, and J. Zobel, eds. New Orleans, ACM Press, New York: 137–145. Yang, Y., Ault, T., and Pierce, T. (2000). Combining Multiple Learning Strategies for Effective Cross-Validation. In Proceedings of ICML-00, 17th International Conference on Machine Learning. P. Langley, ed. Stanford, CA, Morgan Kaufmann Publishers, San Francisco: 1167– 1182. Yang, Y., Ault, T., Pierce, T., and Lattimer, C. W. (2000). Improving Text Categorization Meth-ods for Event Tracking. In Proceedings of SIGIR-00, 23rd ACM International Conference on Research and Development in Information Retrieval. N. J. Belkin, P . Ingwersen, and M.-K. Leong, eds. Athens, Greece, ACM Press, New York: 65–72. Yang, Y., and Chute, C. G. (1993). An Application of Least Squares Fit Mapping to Text Information Retrieval. In Proceedings of SIGIR-93, 16th ACM International Conference on Research and Development in Information Retrieval. R. Korthage, E. Rasmussen, and P. Willett, eds. Pittsburgh, ACM Press, New York: 281–290. Yang, Y., and Chute, C. G. (1994). “An Example-Based Mapping Method for Text Catego-rization and Retrieval.” ACM Transactions on Information Systems 12(3): 252–277. Yang, Y., and Liu, X. (1999). A Re-examination of Text Categorization Methods. 
In Proceed-ings of SIGIR-99, 22nd ACM International Conference on Research and Development in Information Retrieval. M. Hearst, F. Gey, and R. Tong, eds. Berkeley, CA, ACM Press, New York: 42–49. Yang, Y., and Pedersen, J. O. (1997). A Comparative Study on Feature Selection in Text Categorization. In Proceedings of ICML-97, 14th International Conference on Machine Learning. D. H. Fisher. Nashville, TN, Morgan Kaufmann Publishers, San Francisco: 412– 420. Yang, Y., Slattery, S., and Ghani, R. (2002). “A Study of Approaches to Hypertext Catego-rization.” Journal of Intelligent Information Systems 18(2/3): 219–241. Yang, Y., and Wilbur, J. W. (1996a). “An Analysis of Statistical Term Strength and Its Use in the Indexing and Retrieval of Molecular Biology Texts.” Computers in Biology and Medicine 26(3): 209–222. Yang, Y., and Wilbur, J. W. (1996b). “Using Corpus Statistics to Remove Redundant Words in Text Categorization.” Journal of the American Society for Information Science 47(5): 357–369. Yang, Y., Zhang, J., and Kisiel, B. (2003). A Scalability Analysis of Classifiers in Text Cate-gorization. In Proceedings of SIGIR-03, 26th ACM International Conference on Research and Development in Information Retrieval. J. Callan, G. Cormack, C. Clarke, D. Hawking, and A. Smeaton, eds. Toronto, ACM Press, New York: 96–103. Yao, D., Wang, J., Lu, Y., Noble, N., Sun, H., Zhu, X., Lin, N., Payan, D., Li, M., and Qu, K. (2004). Pathway Finder: Paving the Way Towards Automatic Pathway Extraction. In Pro-ceedings of the 2nd Asian Bioinformatics Conference. Dunedin, New Zealand, Australian Computer Society, Darlinghurst, Australia: 53–62. Yavuz, T., and Guvenir, H. A. (1998). Application of k-nearest Neighbor on Feature Projections Classifier to Text Categorization. In Proceedings of ISCIS-98, 13th International Symposium on Computer and Information Sciences. U. Gudukbay, T. Dayar, A. Gorsoy, and E. Gelenbe, eds. Ankara, Turkey, IOS Press, Amsterdam: 135–142. 386 Bibliography Ye, N. (2003). The Handbook of Data Mining. Mahwah, NJ, Lawrence Erlbaum Associates. Yee, K.-P., Fisher, D., Dhamija, R., and Hearst, M. (2001). Animated Exploration of Dynamic Graphs with Radial Layout. In Proceedings of IEEE Symposium on Information Visual-ization (InfoVis 2001). San Diego, CA, IEEE Computer Society Press, Washington, DC: 43–50. Yeh, A., and Hirschman, L. (2002). “Background and Overview for KDD Cup 2002 Task 1: Information Extraction from Biomedical Articles.” KDD Explorarions 4(2): 87–89. Yi, J., and Sundaresan, N. (2000). A Classifier for Semi-Structured Documents. In Proceedings of KDD-00, 6th ACM International Conference on Knowledge Discovery and Data Mining. Boston, ACM Press, New York: 340–344. Yoon, S., Henschen, L. J., Park, E., and Makki, S. (1999). Using Domain Knowledge in Knowl-edge Discovery. In Proceedings of the ACM Conference CIKM ’99. Kansas City, MO, ACM Press, New York: 243–250. Yu, E. S., and Liddy, E. D. (1999). Feature Selection in Text Categorization Using the Bald-win Effect Networks. In Proceedings of IJCNN-99, 10th International Joint Conference on Neural Networks. Washington, DC, IEEE Computer Society Press, Los Alamitos, CA: 2924–2927. Yu, K. L., and Lam, W. (1998). A New On-Line Learning Algorithm for Adaptive Text Filter-ing. In Proceedings of CIKM-98, 7th ACM International Conference on Information and Knowledge Management. G. Gardarin, J. French, N. Pissinou, K. Makki, and L. Bouganim, eds. Bethesda, MD, ACM Press, New York: 156–160. Yumi, J. (2000). 
Graphical User Interface and Visualization Techniques for Detection of Emerg-ingConcepts.M.S.thesis,DepartmentofComputerScience,UniversityofIllinoisatUrbana-Champaign. Zaiane, O. R., and Antonie, M.-L. (2002). Classifying Text Documents by Associating Terms with Text Categories. In Proceedings of the 13th Australasian Conference on Database Technologies. Melbourne, Australia, ACM Press, New York: 215–222. Zamir, O., and Etzioni, O. (1999). “Grouper: A Dynamic Clustering Interface to Web Search Results.” Computer Networks. 31(11–16): 1361–1374. Zaragoza, H., Massih-Reza, A., and Gallinari, P . (1999). A Dynamic Probability Model for Closed-Query Text Mining Tasks. Draft submission to KDD ’99. Zelikovitz, S., and Hirsh, H. (2000). Improving Short Text Classification Using Unlabeled Back-ground Knowledge. In Proceedings of ICML-00, 17th International Conference on Machine Learning. P. Langley, ed. Stanford, CA, Morgan Kaufmann Publishers, San Francisco: 1183– 1190. Zelikovitz, S., and Hirsh, H. (2001). Using LSI for Text Classification in the Presence of Back-ground Text. In Proceedings of CIKM-01, 10th ACM International Conference on Informa-tion and Knowledge Management. H. Paques, L. Liu, and D. Grossman, eds. Atlanta, ACM Press, New York: 113–118. Zhang, D., and Lee, W. S. (2003). Question Classification Using Support Vector Machines. In Proceedings of SIGIR-03, 26th ACM International Conference on Research and Devel-opment in Information Retrieval. J. Callan, G. Cormack, C. Clarke, D. Hawking, and A. Smeaton, eds. Toronto, ACM Press, New York: 26–32. Zhang, J., Jin, R., Yang, Y., and Hauptmann, A. (2003). Modified Logistic Regression: An Approximation to SVM and Its Applications in Large-Scale Text Categorization. In Pro-ceedings of ICML-03, 20th International Conference on Machine Learning. Washington, DC, Morgan Kaufmann Publishers, San Francisco: 888–895. Zhang, J., and Yang, Y. (2003). Robustness of Regularized Linear Classification Methods in Text Categorization. In Proceedings of SIGIR-03, 26th ACM International Conference on Research and Development in Information Retrieval. J. Collan, G. Cormack, C. Clarke, D. Hawking, and A. Smeaton, eds. Toronto, ACM Press, New York: 190–197. Zhang,K.,Wang,J.T.L.,andShasha,D.(1995).“OntheEditingDistanceBetweenUndirected Acyclic Graphs.” International Journal of Foundations of Computer Science 7(1): 43–57. Bibliography 387 Zhang, T., and Oles, F. J. (2001). “Text Categorization Based on Regularized Linear Classifi-cation Methods.” Information Retrieval 4(1): 5–31. Zhao, Y., and Karypis, G. (2002). Criterion Functions for Document Clustering: Experiments and Analysis. Technical Report, TR 01–40. Minneapolis, Department of Computer Science, University of Minnesota. Zhdanova,A.V.,andShishkin,D.V.(2002).ClassificationofEmailQueriesbyTopic:Approach Based on Hierarchically Structured Subject Domain. In Proceedings of IDEAL-02, 3rd Inter-national Conference on Intelligent Data Engineering and Automated Learning. H. Yin, N. Allinson, R. Freeman, J. Keane, and S. Hubbard, eds. Manchester, UK, Springer-Verlag, Heidelberg: 99–104. Zhong, S., and Ghosh, J (2003). “A Comparative Study of Generative Models for Document Clustering.” Knowledge and Information Systems: An International Journal 8: 374–384. Zhou, M., and Cui, Y. (2004). “GeneInfoViz: Constructing and Visualizing Gene Relation Networks.” In Silico Biology 4(3): 323–333. Zhou, S., Fan, Y., Hua, J., Yu, F., and Hu, Y. (2000). 
See also machine learning algorithms, 68 inductive rule, 73 rules, 74 supervised, 70 Leass, H.J., 113 left-hand sides (LHSs), 26, 45, 181 lemmas, 60 lemmatization, 6, 283 Lent, B., 9 lexical analysis, 105, 106, 107 lexical reiteration, 115 lexicons, 8, 42, 44 domain, 44 external, 283 GN, 171 PNP , 171, 172 semantic, 169, 170 Index 401 LHSs. See left-hand sides libraries. See also National Library of Medicine graphing software, 207 integration/customization of, 207 life sciences, 273 business sector, 280 research, LINDI project, 18 line graphs, 207–208 link analysis and, 226 multi-, 208 as prototyping tools, 207 linear least-square fit (LLSF), 74 linguistic(s) computational, 1, 3 features, 100 processing, 283 sentence analysis, 109 link(s) detection, 230–231 between entities, 242 operations, 203–204 link analysis, 225 concepts, 226 histograms and, 226 line graphs and, 226 software packages for, 271–272 literature, extraction of, 190 LLSF. See linear least-square fit Los Alamos II-type concept extraction, 7 loss ratio parameter, 74 Louis-Dreyfus, Julia, 248 (LP)2 algorithm, 120 LSI. See latent semantic indexing Lycos, 199 machine learning (ML), 64, 70–78, 166 algorithms, 70 anaphora resolution and, 117–119 classifier, 70 techniques, 70–71 MacKechnie, Keith, 248 mapping, structural, 125–127 maps, category connecting, 211–212, 239 marginal probability, 71 markables, 117 market basket associations, 24, 25, 39 problems, 25, 39 M-association, 28 matches contained, 101 exact, 101 overlapped, 101 matrix binary, 243 dissimilarity, 247 error, 268 transmission, 143 maximal associations. See also M-association; M-confidence; M-frequent; M-support alone and, 27 M-factor of, 40 rules, 27, 40 maximal entropy (ME), 131, 138–140, 153 maximal entropy Markov model (MEMM), 140–141 algorithm, 121 comparing, 153 HMM and, 153 information extraction and, 152–153 training, 141 maximum likelihood estimation, 91 McCallum, A., 146 M-confidence, 27, 28 ME. See maximal entropy measures centrality, 249 chi-square, 69, 200 interestingness, 9, 179 network centralization, 255 performance, 79 similarity, 85 uniformity, 152 MedLEE medical NLP system, 309 MedLine, 11, 78, 275 medoids, centroids v., 90 MEMM. See maximal entropy Markov model merger activity, 288–289 meronymy, 112 MeSH, 275 MESH thesaurus, 65 Message Understanding Conferences (MUC), 96–101 metabootstrapping, 169, 170 methodologies. See also preprocessing methodologies categorization, 6 CDM-based, 7, 12 information extraction, 11 term-extraction, 6, 12, 95, 283 text mining, 9 M-factor, 40 M-frequent, 28 Microsoft, 199 middle-tier, 193 Miller, James, 225 minconf thresholds, 26, 40 minimal spanning tree (MST), 88 minimization, 248 minsup thresholds, 26, 40 Mitchell, T.M., 172 mixture-resolving algorithms, 87 ML. See machine learning 402 Index models block, 262–266, 270 conditional v. generative, 140 data, 213 Document Explorer, 236 ME, 138–140 probabilistic, 131 module. See also Discovery Module DIAL language, 318–319 Document Explorer application, 236 GeneWays Relationship Learner, 309 Montes-y-Gomez, M., 8–10 morphological analysis, 59, 105 most probable label sequence, 144 MSN, 199 MST. See minimal spanning tree M-support, 27, 28 MUC. 
See Message Understanding Conferences MUC-4 corpus, 170 MUC-7 Corpus Evaluation, 164 multilabel categorization, 67 Murphy, Joseph, 307 multivertex graphs, 198 Mutton, Paul, 231 Naive algorithm, 112 Naive Bayes (NB) classifiers, 71, 78, 90–91 named entity recognition (NER), 96, 164 names concept, 326 identifying proper, 106–107 proper, 97 thesaurus, 325 wordclass, 324–325 NASA space thesaurus, 65 National Cancer Institute (NCI), 282 National Cancer Institute (NCI) Metathesaurus, 294 National Center for Biotechnology Information (NCBI), 11 National Institute of Health (NIH), 11 National Library of Medicine (NLM), 2, 11, 275, 282 native feature space, 12 natural language processing (NLP), 4 anaphoric, 118 components, 60 elements, 117 field extraction and, 146 frequent sets in, 24 general purpose, 58, 59–61 MedLEE medical, 309 techniques, 58 natural language text, 1 navigation, concept hierarchy, 182 NB classifiers. See Naive Bayes classifiers NCBI. See National Center for Biotechnology Information NCI. See National Cancer Institute near frequent concept sets, 25 nearest neighbor clustering, 88 negative borders, 36 NER. See named entity recognition NetMap, 209 NetMiner software, 272 network(s). See also spring embedding, network graphs activity, 198 automatic layout of, 244–248, 275 centralization, 249, 255–256 clique, 244 complex, 75 dense, 244 directed, 249, 260 formulas, 255 hijacker, 266–270 layered display of, 259 layout, 244–248, 275 neural, 75 nonlinear, 75 partitioning of, 257–270 pattern matching in, 270 patterns, 242 self-loops in, 244 social, 242 sparse, 244 two-mode, 244 undirected, 260 weakness of, 260 neural networks (NN), 75 Ng, Vincent, 118 ngrams, 156 construction of, 157 featureset declaration, 163 parent, 161 restriction clause in, 163 shrinkage, 163 statistics for, 159 token generation by, 159, 161 NIH. See National Institute of Health 9/11 hijacker example, 244 NLM. See National Library of Medicine NLP . See natural language processing NN. See neural networks nodes circle graph, 292 concept hierarchy, 20 extracted literature as, 190 internal, 23, 180 radiating, 227 relatedness of, 227 sibling, 22 tree structure of, 182, 195 vertices as, 33 nominals, predicate, 110–111 Index 403 noun groups, 107–108 noun phrases, 97, 113, 114, 115. See also generic noun phrases; phrases, coreferring; pronoun resolution engine; proper noun phrases definite, 117 demonstrative, 117 indefinite, 117 NPs base, 154 chunking of, 154 Nymble, 149–152 experimental evaluation of, 152 HMM topology of, 150 tokenization and, 150 object tree. 
See O-Tree objects hierarchical structure among, 123 structured, 95 OLDMEDLINE, 12 one-anaphora, 111 online categorization, 67 ontologies, 8, 42, 43 commercial, 45 creation, 244–248, 275 DAG, 197 domain, 42–43, 44, 51 external, 100 Gene Ontology Consortium, 275 hierarchical forms generated by, 46 open-ended architecture, 116 operations browsing-support, 203 link, 203–204 preprocessing, 202–204 presentation, 204 search, 203 optimization, 6 algorithm, 246 clustering, 85 problems, 84, 139 orderings, partial, 201–202 ordinal anaphora, 111 orthographic features, 100 O-Tree(s), 123 algorithm, 124–125 documents structured as, 125 output concepts, 156 overabundance pattern, 9, 189 problem of, 9, 179 overlapped matches, 101 OWL, 275 pairs tag–tag, 153 tag–word, 153 Pajek block modeling of, 268 scope of, 271 shrinking option of, 258 Web site, 271 Palka system, 166 paradigm, input–output, 13 parameter loss ratio, 74 maximum likelihood, 91 search, 180 parameterization, 11, 178 parse tree, 137 parsing problem, 138 shallow, 61, 107–108, 154–155, 167 syntactic, 59, 60–61 XML, 116 partial orderings, 201–202 partitioning, 257–270 part-of-speech (POS) tagging, 59, 60, 113, 114, 156, 285. See also Brown Corpus tag set; lemmas categories of, 60 conditional random fields with, 153–154 external, 163 HMM-based, 156 part–whole coreference, 112 patent(s) analysis, 295, 298 documents, 304 managers, 307 search, 274, 295 strategy, 295 trends in issued, 303–307 Patent Researcher, 295 application usage scenarios, 300 architecture/functionality of, 296–300 bundled terms, 304 constraints supported by, 298–299 core text mining operations and, 298–299 data for, 297 DIAL language and, 297 GeneWays comparison with, 307 GUI of, 299–300 implementation of, 296, 297 knowledge discovery support by, 299 preprocessing operations, 297–298 presentation layer, 299–300 queries, 299 refinement constraints, 298–299 Taxonomy Chooser interface, 297 trend analysis capabilities of, 303 visualization tools, 299–300, 301 path(s), 243, 248–249 pattern(s) browsing, 14 collocation, 115 concept, 10 concept occurrence, 19 DIAL language text, 317–318 discovery, 5 distribution-based, 29, 32, 301 Document Explorer search, 236 elements, 323–327 interesting, 29–30 knowledge discovery, 14 matching, 270, 322–323 network, 242 operators, 323 overabundance, 9, 189 RlogF, 173 search for, 8–10 sequential, 41 text, 317–318, 323 text mining, 1, 19 textual data, 40 unsuspected, 191 user’s knowledge of, 36 PCFGs, disambiguation ability of, 156 PDF files, 3 percentage thresholds, 38, 39 perceptual grouping, 59, 123–124 performance measures, 79 Perl script, 164 PersonAffiliation relation, 162–163 Phillips, W., 171 phrases, coreferring, 109. See also generic noun phrases; noun phrases; proper noun phrases PieSky software, 231 plane, hyperbolic non-Euclidean, 217 pleonastic, 110 PNP. See proper noun phrases polysemy, 45, 69 POS tags. 
See part-of-speech tagging power centrality, 254–255 predicate(s) nominals, 110–111 unary/binary, 16 preference collocation pattern, 115 domain terminology, 116 section heading, 115 prefix lengthening of, 149 splitting of, 149 preprocessing methodologies, xi, 2, 57 algorithms, 57 architecture, 58 categorizing, 57 GeneWays’, 308–310 Patent Researcher’s, 297–298 task oriented, 13, 57 varieties of, 57 presentation layer, 185–186 components, 14 elements of, 1, 14 importance of, 10–11 Industry Analyzer, 285–288 interface, 186 Patent Researcher, 299–300 text mining system’s, 10 utilities, 193 presentation operations, 204 prestige, types of, 249 Princeton University, 43 priors Gaussian, 71 Laplace, 71, 72 probabilistic classifiers, 71, 78 probabilistic extraction algorithm, 121 probabilistic generative process, 131 probabilistic models, 131 probability conditional, 71, 142, 143 context-dependent, 149, 152 emission, 132, 150 marginal, 71 transition, 132, 141, 150, 151 problem(s) bootstrapping, 172 categorization, 82 clustering, 84–85 CRF, 143 data sparseness, 136–137, 148 definition, 122–123 document sorting, 65 HMM’s, 132 label bias, 142 optimization, 84, 139 overabundance, 9, 179 parsing, 138 sets, 194 tasks dependent on, 58, 59, 61–62 TC, 69 text categorization, 69, 79 unsolved, 58 procedure C4.5, 73 CART, 73 forward-backward, 132–133 ID3, 73 process, probabilistic generative, 131 processing linguistic, 283 random, 131 themes related to, 131 profiles, user, 11 pronominal anaphora, 110 Index 405 pronominal resolution, 112 pronoun resolution engine, 113 proper names, 97 coreference, 110 identification, 106–107 proper noun phrases (PNP), 171, 172. See also generic noun phrases; noun phrases; phrases, coreferring; pronoun resolution engine proportional thresholding, 67 proportions, 19 concept, 22, 29 interestingness and, 29–30 protocols RDF, 194 XML-oriented, 194 prototyping, 207 proximity, 191 pruning, 73, 178 PubMed, 2, 11, 275 scope of, 2 Web site, 12 pull-down boxes, 276, 278 quality constraints, 186 query association-discovery, 45, 46 canned, 278 choosing entities for, 275 clustering and, 83 constraints, 278 construction of, 278 distribution-type, 205, 292 engines, 16 expressions, 45 GUI driven, 284 Industry Analyzer event-type, 290 interpreters, 10 KDTL, 52–54, 55, 236 languages, 10, 14, 51–52, 177 lists of, 276 parameterization of, 11, 178 Patent Researcher, 299 preset, 274, 276 proportion-type, 205 result sets and, 45 support for, 179 tables, 23 templates for, 278 trend analysis, 304 user’s, 13 query languages, 10, 14 accessing, 177, 186–187 FACT’s, 46 interfaces, 10 parameterization, 178 text mining, 51–52 RDF protocols, 194 redundancy constraints, 186 filters, 201–202 Reed Elsevier company, 275 reference, immediate, 115 referential distance, 116 refinement constraints, 11, 14, 19–41, 191, 284–285, 298–299 techniques, 14–17, 186 regression methods, 74 Reingold, E., 231, 246. 
See also FR method reiteration, lexical, 115 relatedness node, 227 semantic, 69 relationship(s) building, 108 categorization by, 45 context, 32, 33 co-occurrence, 12 data, 2 extraction, 156, 164–166 meaningful, content-bearing, 94 meronymy/holonymy, 112 PersonAffiliation, 162–163 rule attributes, 42 tagged, 164 temporal context, 35 term, 275 relativity, 191 representations 2-D, 219 bag-of-words document, 89 binary document, 73 character’s, 5 concept-level, 7 document’s, 4, 5, 6, 7, 58, 68 term-based, 6, 7 word-level, 6 research deviation detection, 32 enhancing speed/efficiency of, 2 life sciences, text mining and patent, 273 resolution. See also anaphora resolution; coreference, resolution coreference, 109, 112 pronominal, 112 result sets, 45 retrieval cluster-based, 84 document, 179 information, 1, 2, 62, 82 Reuters newswire, 4, 31, 70 RHS. See right-hand side right brain stimulation, 191 406 Index right-hand side (RHS), 26, 45, 200 Riloff, Ellen, 166, 168, 169, 171 Ripper algorithm, 74, 298 RlogF pattern, 173 Rocchio classifier, 74–75 ROLE relation, 165 rules, 165–166 rule(s) associations, 24, 25, 27, 182 averaging, 265 constraints, 327–328 DNF, 73 learners, 74 maximal association, 27, 40 ROLE, 165–166 tagging, 120 TEG syntax, 156 Rzhetzky, A., 310 salience algorithm, 114 Sarkar, M., 228 scaling. See generalized iterative scaling scanner properties, 320, 327 scatter/gather browsing method, 83 scattering, 83 scenario templates (STs), 99 SCFG. See stochastic context-free grammars schemes classification, 131 TF-IDF, 68 weighting, 68 search. See also Google algorithms, 36, 178, 236 association rules, 36 brute force, 9 constraints, 178, 203 DIAL language, 321 Document Explorer patterns of, 236 engines, 82, 199 expanding parameters of, 180 fuzzy, 184 improving, 82–83 Industry Analyzer, 291 leveraging knowledge from previous, 36 operations, 203 parameters, 180 patent, 274, 295 for patterns, 8–10 precision, 83 task conditioning, 178 for trends, 8–10 selection. See feature, selection self-organizing maps (SOMs). See also WEBSOM algorithm generation of, 213, 216–217 multiple-lattice, 219 self-loops, 244 semantic features, 100 semantic lexicon, 169, 170 semantic relatedness, 69 sentences concept, 321 linguistic analysis of, 109 sequential patterns, 30, 41 sets. See also concept sets answer, 1, 23 association rules involving, 200 Brown Corpus tag, 60 frequent and near frequent, 19, 25 frequent concept, 9, 23–24 identical, 111 POS tag, 60 result, 45 σ-cover, 24 test, 67, 100 test document, 79 training, 68, 79, 100, 118 validation, 68, 75 shallow parsing, 61, 107–108, 154–155, 167 shrinkage, 136 defined, 136 hierarchies, 148 ngram, 163 technique, 148, 161 shuffling algorithms, 85 sibling node, 22 similarity cosine, 90, 200, 201 function, 84, 200–201 measures, 85 simple concept association graphs, 200–201 simple concept graphs, 195–205, 239, 286, 294 simulated annealing, 247 single-label categorization, 67 singleton σ-covers, 24 vertex, 198 singular value decomposition (SVD), 89–90, 91 smoothing, 136 social networks, 242 soft mark-up language, 3 software browsing, 177 corporate intelligence, 273 Insight, 217 libraries, 207 link analysis, 271–272 NetMiner, 272 PieSky, 231 protein interaction analysis, 273 search engine, 199 StarTree Studio, 217 SOMs. See self-organizing maps Soon’s string match feature (SOON STR), 118 sparse network, 244 sparseness, 72 data, 136–137, 148 training data, 136–137 Index 407 sparsity. 
See feature, sparsity spring embedding, 231. See also networks algorithms, 245 network graphs, 231 StarTree Studio software, 217 states background, 149 HMM’s classes of, 147 stimulation, right brain, 191 stochastic context-free grammars (SCFG), 131, 137 defined, 138 information extraction and, 155–166 using, 137–138 stop words, 68 strategy divide-and-conquer, 58 patent, 295 string, 138 constants, 324 strong components, 260 structural equivalence, 261 structural mapping, 125–127 structured objects, 95 STs. See scenario templates sublanguages, 138 subtasks, 123–124 suffix lengthening of, 149 splitting of, 149 sum-of-squares, 29 supervised learning, 70 support, 24, 25, 249 query, 179 thresholds, 181 vectors, 76 support vector machines (SVM), 76–77, 78 SVD. See singular value decomposition SVM. See support vector machines Swiss-Prot database, 308 symbolic classifiers, 72 symbols, terminal, 156 SYNDICATE system, 18 Synonym/Homonym resolver, 309 synonymy, 45, 69 syntactic analysis, 105 syntactic heuristics, 171, 172 syntactic parsing, 59, 60–61 syntactical constraints, 186 syntax, TEG rulebook, 156 system(s). See also AutoSlog-TS system; CONSTRUE system; Explora system; GENomics Information Extraction System; hybrid system; MedLEE medical NLP system; Palka system; SYNDICATE system; text mining systems; TEXTRISE system architects, 17 architecture, 46–47, 186 thresholds defined by, 229 table(s) of contents, 83 joins, 1 query, 23 tagging. See also part-of-speech tagging chunk, 155 documents, 94 MUC style, 100 POS, 283 rules, 120 tag–tag pairs, 153 tag–word pairs, 153 target string lengthening of, 149 splitting of, 149 task(s). See also coreference task; subtasks; template element tasks; template relationship task; visual information extraction task AI, 64 algorithms, 58 clustering, 82–84 documents structured by, 57 entity extraction, 150, 156 NE, 96 preprocessing by, 13, 57 problem dependent, 58, 59, 61–62 search, 178 text categorization, 66 TR, 99 taxonomies, 8, 42, 180 classic, 185 concept, 195 editors, 183–184 maintaining, 183 roles of, 182 Taxonomy Chooser interface, 297 TC. See text categorization TEG. See trainable extraction grammar template element (TE) tasks, 98 template relationship (TR) task, 99 templates, 123, 127–128, 278 temporal context graphs, 30, 32, 35 temporal context relationships, 32, 35 temporal selection, 35 term(s), 5–6, 8 candidate, 6 clustering, 69 extraction, 6, 12, 95, 283 hierarchy editor, 237–238 lemmatized, 6 Patent Researcher’s bundled, 304 relationships, 275 tokenized, 6 terminal symbols, 156 term-level representations, 7 termlists, 156 TEs. See template element tasks test sets, 67, 100 text(s) classifiers, 76, 79–80 clustering, xi, 89, 91–92 comprehension, 95 elements extracted from, 96 extraction, 96 filtering, 65–66 fragments, 109 natural language, 1 pattern, 317, 318, 323 tokenization, 320 text analysis, 146–152 clustering tasks in, 82–84 CRF’s application to, 153–155 text categorization (TC), 58, 61–62, 64 applications, 64, 65–66 approaches to, 64 automated, 64 experiments, 79 knowledge engineering approach to, 70 machine learning, 70–78 NN and, 75 problem, 69, 79 stop words removed from, 68 task, 66 text mining. 
See also preprocessing methodologies algorithms, 5, 8 analysis tools for, 1 applications, xi, 8 background knowledge and, 8 biological pathways, corporate finance and, 273 data mining, v, 1, 11 defined, x, 1, 13 essential task of, 4 goals of, 5 GUIs for, 177 human-centric, 189 IE and, 11 input–output paradigm for, 13 inspiration/direction of, 1 introductions to, 11 KDD applications, 13 life sciences and, 273 methodologies, 9 patent research and, 273 pattern overabundance limitation, 9 pattern-discovery algorithms, 1, 5 patterns, 19 preprocessing operations, 1, 2, 4, 7, 8, 13–14, 57 presentation layer elements, 1, 14 query languages for, 51–52 techniques exploited by, 1 visualization tools, 1, 194 text mining applications, 8 corporate finance-oriented, 284 Document Explorer, 18 Explora system, 18 FACT, 18 GUIs of, 177 horizontal, 307 KDT, 18 LINDI project, 18 SYNDICATE system, 18 TEXTRISE system, 18 text mining systems. See also core text mining operations abstract level of, 13 architecture of, 13–18 background knowledge and, 8, 16, 42, 44 baseline distribution for, 22 concept proportions and, 29 content based browsing with, 10 customized profiles with, 11 designers of, 221, 275 distributions and, 29 domain specific data sources, 16 early, 30 empowering users of, 10 front-ends of, 10, 11 graphical elements of, 11 hypothetical, 19 incremental update schemes for, 38 practical approach of, 30 presentation layer of, 10 query engines of, 16 refinement constraints, 11 refinement techniques, 14–17 state-of-the-art, 10, 194 TEXTRISE system, 18 textual data, 88–92, 189, 195 TF-IDF schemes, 68 thematic hierarchical thesaurus, 65 themes, processing, 131 thesaurus MESH, 65 names, 325 NASA aerospace, 65 thematic hierarchical, 65 three dimensional (3-D) effects, 219–221. See also representations, 2-D algorithms, 219 challenges of, 220 disadvantages of, 221 impact of, 221 opportunities offered by, 220 thresholding fixed, 67 proportional, 67 thresholds confidence, 181 data, 39 minconf, 26, 40 minsup, 26, 40 percentage, 38, 39 support, 181 system-defined, 229 user-defined, 229 time-based analysis, 30 Tipster, 96–101 Title Browser, 301 token(s) elements, 327 features of, 150, 161 ngram generation of, 159, 161 UNK, 152 unknown, 152, 161 tokenization, 59, 60, 104, 106, 107 DIAL language text, 320 linguistic processing and, 283 Nymble and, 150 tokenizer, external, 161 tools analysis, 1 browsing, 181 clustering, 11, 184–185 Document Explorer visualization, 236 editing, 184 graphical, 189 hybrid, 221–224 hyperbolic tree, 217 line graphs as prototyping, 207 prototyping, 207 visualization, 1, 10, 14, 192, 194, 226–227, 292 TR. See template relationship task trainable extraction grammar (TEG), 155, 156 accuracy of, 165 experimental evaluation of, 164 extractor, 164 grammar, 157, 158 as hybrid system, 156 rulebook syntax, 156 training, 158–161 training classifiers, 79 CRF’s, 144 examples, 117 HMM, 135–136 MEMM, 141 sets, 68, 79, 100, 118 TEG, 158–161 transmission emission cycle, 131 matrix, 143 tree(s). See also minimal spanning tree binary, 73 browsing, 15 hierarchical, 42, 195 hyperbolic, 217–219 node structure of, 182 parse, 137 pruning, 73, 178 trend(s) analysis, 9, 30–31, 41, 299, 303 graphs, 30, 32, 35, 239 patent, 303–307 search for, 8–10 trigger-constraint functions, 153 trigrams, 5 tuple dimension, 5 two-mode network, 244 UCINET, 271–272 UMLS Metathesaurus. 
See Unified Medical Language System Metathesaurus unary predicates, 16 undirected networks, 260 Unified Medical Language System (UMLS) Metathesaurus, 282 uniformity, 152 United States Patent and Trademark Office, 297, 303 unknown tokens, 152, 161 UNK tokens, 152 user(s) browsing by, 10, 13 clustering guided by, 83 customizing profiles of, 11 empowering, 10 groups, 194 interactivity of, 179, 189 M-support and, 28 pattern knowledge of, 36 querying by, 13 thresholds defined by, 229 values identified by, 26 user-identified values minconf, 26 minsup, 26 utilities, 193 validation sets, 68, 75 variable backward, 133 forward, 132, 141 vector(s) feature, 68 formats, 7 global feature, 142 original document, 90 space model, 85 support, 76 weight, 142 verbs, indicating, 115 vertices, 33, 258, 264 VIE task. See visual information extraction task visual information extraction (VIE) task, 122, 123, 128 visual techniques, 194–225 visualization. See also circle graphs 3-D, 219 approaches, 189, 191, 279 assigning colors to, 279 capabilities, 274 circle graph, 292 DAG techniques of, 198 data, 217 Document Explorer tools for, 236, 238 hyperbolic tree, 217 interface, 191 link analysis and, 225 Patent Researcher’s tools for, 299–300, 301 specialized approach to, 225 tools, 1, 10, 14, 192, 194, 226–227, 292 user interactivity and, 189 Viterbi algorithm, 133–134, 138, 141 vocabulary, controlled, 65, 275 walk, 243 Washington Post Web site, 244 weak components, 260 weak hypothesis, 77 weak learner, 77 Web pages hierarchical categorization of, 66 HTML and, 3 hypertextual nature of, 66 Web site(s) FBI, 244 Kevin Bacon game, 248 Pajek, 271 PubMed, 12 UCINET, 271–272 U.S. Patent and Trademark Office, 297, 303 Washington Post, 244 WEBSOM, 213–215. See also self-organizing maps advantages of, 215 zoomable interface of, 215 weight vector, 142 weighted linear combination, 77 weights binary, 68 giving, 68 WHISK algorithm, 119 word stems, 4 wordclass names, 324–325 word-level representations, 6 WordNet, 43, 44, 50, 51, 112 word-processing files, 3 words, 5–6, 8 identifying single, 6 POS tag categorization of, 60 scanning, 106 stop, 68 syntactic role of, 58 synthetic features v. naturally occurring, 69 workbench, evaluation, 116 WYSIWYG HTML editor, 3 Xerox PARC, 217 XML parsing, 116 protocol, 194 Yang, Y., 76 Zhou, M., 197, 198 zoning module. See tokenization zoomability, 191
189341
https://www.youtube.com/watch?v=XcRADNicd7o
Organic Structures | Chapter 2 - Organic Chemistry (2nd Edition)
Last Minute Lecture · 3020 subscribers · 108 views · 2 likes · Posted: 1 Aug 2025
Description: Chapter 2 of Organic Chemistry (Second Edition) by Clayden, Greeves, and Warren introduces the foundational principles for reading, interpreting, and drawing organic molecular structures. The chapter opens by exploring the structural complexity of organic molecules through examples like palytoxin, highlighting the immense variety and size these compounds can attain. It teaches that organic molecules are built upon hydrocarbon frameworks—chains, branches, and rings of carbon atoms—which act as skeletal backbones supporting functionally important groups such as alcohols, amines, acids, and others. These functional groups, rather than the carbon skeletons, determine chemical behavior, biological activity, solubility, acidity, and reactivity. Through clear visual examples, students are taught conventions for organic structural diagrams, including guidelines to omit hydrogen atoms bonded to carbon, simplify carbon atom notations, and draw carbon chains in zig-zag formations. Three-dimensional stereochemical representations using wedges and dashes are also introduced to show spatial arrangements, particularly important in molecules with chirality. The chapter continues with an extensive classification of hydrocarbon frameworks, detailing naming conventions and symbolic abbreviations (like Me, Et, Ph, Bn, i-Pr), including isomeric groups such as sec-butyl and tert-butyl. It explains the structure and symbolism behind rings, such as phenyl and aryl groups, and elaborates on complex molecules like strychnine, testosterone, and buckminsterfullerene. A major section introduces over 20 functional groups—including alcohols, ethers, amines, esters, amides, nitriles, aldehydes, ketones, carboxylic acids, alkyl halides, and more—with examples from nature, medicine, and industry. Each group is linked to its characteristic reactivity, biological relevance, and representative structures, such as paracetamol (an amide), TNT (a nitro compound), and vitamin C (a polyhydroxylated acid). The chapter also classifies these groups by oxidation level (alkane, alcohol, aldehyde, carboxylic acid, carbon dioxide levels), aiding understanding of redox relationships between functional groups. In the final sections, the text explores systematic IUPAC nomenclature, contrasts it with widely used trivial names, and introduces the concept of hybrid naming. Students learn how to deduce structures from names and vice versa, supported by practical rules for numbering carbon chains and naming branched, cyclic, or multi-substituted compounds. The authors emphasize clarity, realism, and communication in chemical drawing and naming. By the end of the chapter, students are equipped with the fundamental language of organic chemistry, essential for understanding, discussing, and synthesizing organic molecules in future chapters.
📘 Read full blog summaries for every chapter:
📘 Have a book recommendation? Submit your suggestion here:
Thank you for being a part of our little Last Minute Lecture family! 
Organic Chemistry Chapter 2 summary, how to draw organic structures, organic functional groups explained, Clayden organic structure guide, amino acids structure, hydrocarbon frameworks and skeletal formulas, oxidation levels in organic compounds, alkane alkene alkyne structures, functional group classification, organic molecule drawing conventions, IUPAC vs trivial naming, common organic fragments like Me Et Ph Bn, how to name organic molecules, stereochemistry and wedge notation, chapter 2 Clayden Organic Chemistry structural overview Transcript: Welcome to the deep dive. We plunge into complex topics, pull out the key insights, basically help you get truly well-informed. Today, uh, we're tackling the language of organic chemistry. Just imagine trying to decipher a blueprint for something incredibly intricate, like a molecule that's both beautiful and deadly. Think palytoxin. This natural compound potentially has anticancer uses, but it's also, well, one of the most toxic things known. It's got, what, over a hundred carbons, loads of other atoms, all tangled up. Looking at the full structure is kind of mind-boggling. But a chemist sees something precise, elegant even. So the big question is how do chemists possibly make sense of all that variety, that intricacy? How do they even start to draw, interpret, and, you know, talk about these structures? That's our mission today. We're aiming to give you a shortcut to understanding the fundamental language of organic chemistry. How we draw molecules, how we interpret them, how we communicate. And we're drawing heavily today from chapter two of the big organic chemistry textbook by Clayden, Greeves, and Warren. Really laying the groundwork for things like mechanistic reasoning and reaction pathways. Exactly. Think of it as learning the essential, uh, handwriting of chemistry. If you don't get this foundation, the rest of organic chemistry just stays a mystery. It's like trying to build something complex without knowing what the parts even look like. So, we're going to demystify it a bit for you today. Show how these seemingly complex structures reveal their secrets through, well, just a few core principles. It's about spotting patterns, understanding the conventions chemists use every single day. Okay, let's unpack that. So, organic chemistry, at its heart, it's the study of carbon compounds. Right. Right. But it's almost never just carbon and hydrogen. Most organic compounds, the interesting ones anyway, they also contain oxygen, nitrogen, maybe sulfur, phosphorus. And those atoms are really where the action is, where the reactivity happens. Oh, okay. So, there's a key distinction then. Absolutely. You've got the molecule's skeleton, which is, you know, the chains and rings of carbon and hydrogen. And then you have the functional groups. A really good example is amino acids. Think about glycine, alanine, phenylalanine. They have pretty different carbon skeletons, right? Some short chains, some have bulky rings. Yeah, quite varied. But chemically they behave in very similar ways. They all dissolve in water. They're amphoteric, meaning they can act as an acid or a base, and they link up to form proteins. And that similarity comes from their common functional groups. They all have an amino group, that's the NH2 bit, and a carboxylic acid group, the CO2H. So functional groups are exactly what they sound like. They're the specific groups of atoms that determine how a molecule functions chemically and biologically. 
The hydrocarbon framework, the skeleton, it's basically just the support structure. Like our skeleton supports our organs. Precisely. The framework holds the functional groups in place, lets them interact. It's the scaffold. The functional groups are the actual, you know, working parts, the engine. That's a really helpful analogy. And, uh, speaking of frameworks, how chemists actually draw these things on paper is like a whole language itself, isn't it? Why is getting the drawing right so important? Especially for, you know, talking about reactions and transformations. It's all about clarity and honestly efficiency, especially when you're trying to show how one molecule changes into another, like in a reaction pathway. So, our first guideline is pretty basic. Draw chains of atoms as zigzags. You might see older diagrams with straight lines. Yeah, I've seen those. That was mostly for printing ease back in the day, but X-ray studies show carbon chains aren't straight. They adopt the zigzag arrangement because of the tetrahedral angle, about 109° for single bonds. So, our 2D drawings need to reflect that 3D reality, at least roughly. It gives a better feel for the molecule's actual shape, right? Aiming for realism where it counts. But then you mentioned economy, leaving things out. Exactly. We want realism, but not clutter. So, guideline two is miss out the H's attached to carbon atoms and the C–H bonds unless you really need them. How does it work? Well, the principle is simple. We just assume any carbon atom that isn't showing four bonds is attached to the right number of hydrogens to make up four bonds. Showing every single hydrogen just clutters the diagram. It obscures the important bits, the functional groups. Okay. And you even leave out the C for carbon sometimes. Yep. That's guideline three. Miss out the capital C's for carbon atoms unless necessary. Every kink in that zigzag line and every end of a line, we understand that represents a carbon atom. Ah, I see. So the lines themselves represent the carbon chain. Exactly. And this makes the functional groups, the non-carbon bits, stand out much more clearly. Now, of course, there are exceptions to those last two rules. If a carbon or a hydrogen is part of a functional group, like the C in a CO2H group or the H in an aldehyde CHO group, or if it's directly involved in a reaction mechanism you're trying to show, then yeah, you should draw it in. It's all about clarity, you see, not rigid rules. If it helps understanding, include it. If it just adds clutter, leave it out. Got it. So, it's finding that balance, realism, but keeping it clean and focused. What about showing the 3D aspect? You mentioned stereochemistry. How do you get that across on a flat page, right? That's crucial. We use special bonds. A bold solid wedge, like a triangle pointing towards you, means that bond or group is coming out of the page towards you. Okay? Coming forward. And a dashed or hashed bond, which looks like dashes receding, means that group is going away from you, behind the plane of the page. Normal lines are just in the plane of the page. Oh, wedges and dashes. Yep. Wedges and hashes. This lets us show the tetrahedral arrangement around carbon atoms, which is absolutely vital for understanding how molecules fit together, how enzymes recognize them, how they react in 3D space. It's fundamental for stereochemistry. So, to wrap up the drawing part, the goal is realistic, economical, and clear structures. This is genuinely the universal handwriting of chemistry. It's how chemists communicate visually every single day. It really is like learning a secret code. I can definitely see why mastering this is critical before you can really understand reaction pathways or design a synthesis.
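To make that "fill carbon up to four bonds" convention concrete, here is a minimal Python sketch. It is an illustration added alongside this transcript, not something from the lecture itself, and the way a drawn structure is encoded here, as just the summed order of the bonds shown at each carbon, is a deliberately toy assumption:

```python
# Guideline two in code: any carbon drawn with fewer than four bonds is
# assumed to carry enough implicit hydrogens to bring its valence to four.

def implicit_hydrogens(drawn_bond_order_sum: int) -> int:
    """Hydrogens left implicit on a neutral carbon, given the total order
    of the bonds actually drawn at it (a double bond counts as 2)."""
    if not 0 <= drawn_bond_order_sum <= 4:
        raise ValueError("a neutral carbon carries at most four bonds")
    return 4 - drawn_bond_order_sum

# Ethanol drawn skeletally as a zigzag ending in OH:
# C1 shows one bond (to C2) -> CH3; C2 shows two bonds (to C1 and O) -> CH2.
for label, drawn in [("C1", 1), ("C2", 2)]:
    print(label, "has", implicit_hydrogens(drawn), "implicit hydrogens")
```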
Okay, now we know how to draw them. Let's dig into the actual building blocks. What kind of shapes does carbon form? Carbon's ability to bond with itself is amazing. It leads to this incredible variety. The simplest structures are just chains. These can be short segments like in polythene. You often see those drawn with wiggly lines at the ends to show the chain keeps going. Or they can be incredibly long and complex, like in some natural products. There's an antibiotic called linearmycin. Its chain is so long you practically have to wrap it around the page. Wow. And chemists have shorthand for bits of these chains, don't they? Abbreviations. Oh, definitely. For tidiness, we use standard abbreviations for common alkyl groups. Methyl is Me for a CH3 group. Ethyl is Et for CH2CH3. Then Pr for propyl, Bu for butyl. So you might see MeS instead of writing out CH3S. Exactly. Yeah. Like in the amino acid methionine. It just cleans things up. And then there's the super useful wildcard R group. Yeah. R just stands for any alkyl group. Basically the rest of the carbon skeleton. It's incredibly handy because, like we said, reactivity often depends mostly on the functional group. So R lets us talk about a whole class of compounds without getting bogged down in the specifics of the skeleton. We can just say an alcohol, ROH, or an amino acid where R is the variable side chain. Makes sense, generalizing. Okay. So beyond chains there are rings. Kekulé and the snake biting its tail comes to mind for benzene. Ah, the classic story. Whether it's true or not, the benzene ring is absolutely fundamental. It's everywhere. Phenylalanine, aspirin, paracetamol, you name it. And there's an abbreviation for that too. Yep. If you have a C6H5 benzene ring attached to something, we call it a phenyl group, abbreviated Ph. And if it's any kind of substituted phenyl ring, we can use the wildcard aryl, or Ar. Okay, Ph and Ar. But rings aren't just benzene. You get huge rings like the 13-membered ring in muscone, which gives musk its smell, or fused rings like in steroid hormones, testosterone, oestradiol, complex frameworks with huge biological effects. And for simple rings, we just use the prefix cyclo. Cyclopropane, a triangle, cyclobutane, a square, cyclohexane, a hexagon. You find these in all sorts of places from insecticides to artificial sweeteners. And it gets even more complex because these chains and rings can be branched, right? Yes, absolutely. Hydrocarbon frameworks are often branched. And again, we have common names and abbreviations for these branched bits. Isopropyl, or i-Pr, looks like a Y-shape. Okay. You find it in things like LDA, lithium diisopropylamide, a really common strong base used in synthesis. Then there's isobutyl, i-Bu, which is in ibuprofen, sec-butyl, s-Bu, and tert-butyl, t-Bu. The tert-butyl group is particularly bulky. You see it in antioxidants like BHT. All these branches, does that affect how they react? Hugely. And this brings us to a really crucial concept for understanding reactions, for mechanistic reasoning. We classify carbon atoms as primary, secondary, or tertiary. How does that work? It's simple. It's just the number of other carbon atoms attached directly to the carbon you're looking at. A primary carbon, 1°, is attached to one other carbon. Secondary, 2°, to two others. Tertiary, 3°, to three others. There's even quaternary, 4°, attached to four. And why is that important? Think about access. A primary carbon is like being at the end of a quiet street. A tertiary carbon is like being in the middle of a busy cluttered junction. This clutter, we call it steric hindrance, makes a big difference in how easily other molecules can get close enough to react with that carbon. It profoundly affects reactivity. Understanding this is fundamental if you're trying to predict how a reaction will go or design a drug molecule to fit into a specific biological target.
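That neighbour-counting rule is simple enough to state as code. The short Python sketch below is an added illustration, not part of the lecture, and its adjacency-list encoding of isobutane is a hand-rolled toy rather than a real cheminformatics format:

```python
# Classify each carbon by how many of its neighbours are also carbons.
DEGREE_NAMES = {1: "primary (1°)", 2: "secondary (2°)",
                3: "tertiary (3°)", 4: "quaternary (4°)"}

def classify_carbons(adjacency: dict[str, list[str]]) -> dict[str, str]:
    result = {}
    for atom, neighbours in adjacency.items():
        if atom.startswith("C"):
            carbon_neighbours = sum(nb.startswith("C") for nb in neighbours)
            result[atom] = DEGREE_NAMES.get(carbon_neighbours, "isolated")
    return result

# Isobutane, (CH3)3CH: three primary carbons around one tertiary centre.
isobutane = {"C1": ["C2"], "C3": ["C2"], "C4": ["C2"],
             "C2": ["C1", "C3", "C4"]}
print(classify_carbons(isobutane))
# -> {'C1': 'primary (1°)', 'C3': 'primary (1°)',
#     'C4': 'primary (1°)', 'C2': 'tertiary (3°)'}
```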
So the shape, the branching, it all feeds into the reactivity. Mhm. But you said earlier the real heart of the reactivity lies with the functional groups. That's right. The framework is the stage, but the functional groups are the actors. Think about ethane, just C2H6. It's pretty unreactive, right? It mainly just burns. But ethanol, C2H5OH, which just has that one extra oxygen atom in the hydroxyl group, the OH group, suddenly it reacts with acids, bases, oxidizing agents. It's much more chemically versatile. All because of the OH group? All because of the OH group. That hydroxyl group is the source of ethanol's reactivity. And other alcohols, no matter what their carbon skeleton, the R part, looks like, they tend to have similar chemical properties because they all have that OH group. So recognizing functional groups and knowing their general character is the absolute key to understanding and predicting organic reactions and pathways. Okay, so let's maybe, uh, tour some of the important ones. Good idea. Let's just focus on recognition and general character for now. Not the deep reactivity details yet. First up, compounds with carbon–carbon multiple bonds. We have alkenes, which contain a C=C double bond. They're called unsaturated and they readily undergo addition reactions. Think of things like pinene, that classic pine smell. Okay, double bonds. Then alkynes, which have a C≡C triple bond. These are linear around the triple bond and generally even more reactive than alkenes. You find them in some complex natural products like certain anti-tumor agents. Triple bonds, right? Okay. Next, let's look at some common oxygen-containing groups. Alcohols, we've mentioned, have the hydroxyl group, ROH. They're often water soluble, involved in hydrogen bonding. Think sugars like sucrose, full of OH groups. Then ethers, with the structure R1OR2. They link two alkyl groups through an oxygen. Diethyl ether is a classic anesthetic. THF is a common solvent. Some natural toxins have incredibly complex structures made of multiple ether rings. Got it. Alcohols and ethers. Moving on to nitrogen-containing groups. Amines, RNH2, R2NH, or R3N, contain the amino group. They're often basic, sometimes smelly, think decaying fish, but also vital components of amino acids, DNA bases, many drugs like amphetamine. Nitro compounds, RNO2, have the nitro group. This group can make molecules explosive, like TNT, but it's also found in some pharmaceuticals like the sleeping pill nitrazepam. Okay, amines and nitro groups. And we can't forget halogen-containing groups. Alkyl halides, RX, where X is a halogen: fluorine, chlorine, bromine, or iodine. Their reactivity varies a lot depending on the halogen and the structure. Think PVC plastic, polyvinyl chloride, or methyl iodide. A common lab chemical, right? The halogens. Okay. Now for a really important class, carbonyl compounds. These all contain the C=O group, the carbonyl group. This is probably one of the most versatile functional groups there is.
This is probably one of the most versatile functional groups there is. C bond O. Exactly. Within this family you have aldahhides general structure RCH and ketones R1 CO2. They're both reactive often targets for nucleophiles. Acetldahhide is formed when ethanol is oxidized in the liver. Acetone is a common ketone solvent. Chanel number five has an aldahhide. Raspberry flavor comes from a ketone. And a crucial drawing point here. Always draw aldahhides as RCH showing the H attached to the C. Never write RCO. Why not RCO? Because RCO implies the hydrogen is bonded to the oxygen like in an alcohol which is completely different. RCH means the hydrogen is bonded to the carbonal carbon. Totally different reactivity, different functional group family. It's a really important distinction for mechanisms. Ah okay, that makes sense. CO not CO. Right. Now related to carbonals we have caroxyic acids RCO2. They contain the caroxile group. As the name suggests they're acidic. citric acid, malic acid, tartaric acid in fruits. Those are all caroxyic acids. And derivatives of caroxilic acids include esters, R1, CO2, R2. They make up fats and oils. And many have characteristic fruity or floral smells. Think banana, rum, apple flavors. The difference between saturated and unsaturated fats relates to the R groups in these esters. Esters, right? Like in polyester fabrics, too. Well, related bonds. Yes. Then a catalyh or with nkal groups. These are incredibly important. Biologically, the peptide bond linking amino acids in proteins is an amide bond. Aspartame, the artificial sweetener, is related. Paracetamol is an amide. We also have nitrals or cyanides, RCN with a carbon nitrogen triple bond. And isil chlorides, RC, which are very reactive versions of caroxilic acids, mainly used as intermediates in the lab. Wow, that's quite a list. Yeah, it really shows how just combining C H O N and H hallogens in different ways creates this huge diversity of chemical behavior. It really does. And recognizing these groups is step one. Step two is understanding how they interrelate which brings us to that oxidation level concept you mentioned. How does that help connect these different groups? Ah yes, this is a really neat way to organize functional groups and start thinking about transformations specifically oxidation and reduction reactions which are fundamental in organic chemistry. Basically we group functional groups based on the number of bonds a particular carbon atom has to hetereroatoms. Remember hetereroatoms are just atoms other than C or H. So typically O N H hallogens. Okay. Number of bonds to non-carbon hydrogen atoms. Exactly. At the bottom you have the alkan oxidation level. Here the carbon has zero bonds to hetereroatoms. Think methane or any carbon in a simple alkan chain. One step up is the alcohol oxidation level. This means the carbon has one bond to a hetereroatom. This includes alcohols, CO, ether, CO, alkalhalides, CX. Interestingly, we often group alkans here too because you can make them from alcohols without formal oxidation or reduction, just elimination. Okay, one bond to NX. Then comes the aldahhide oxidation level with two bonds from the carbon to hetereroatoms. This covers aldahhides. CO counts as two bonds to O ketones. CO again and also things like acetals where carbon has two single bonds to oxygen atoms or even dchlorommethane CH2 CL2 where carbon has two bonds to chain three bonds makes sense. Next the caroxilic acid oxidation level signifies three bonds to hetereroatoms. 
Okay, we know how to draw them. We know the building blocks and functional groups. We can even classify them by oxidation level. What about actually naming these things? There seem to be different kinds of names flying around. Ah, nomenclature. Yeah, it can seem a bit chaotic. Broadly, you have two main types of names. There are trivial names, which are often historical, maybe based on where the compound was first found, like palytoxin from a coral or muscone from musk deer. They're often simple and easy to remember for common stuff, like acetic acid for the acid in vinegar. Exactly. But with millions of known compounds, you can't have a unique trivial name for everything. That's where systematic nomenclature comes in. Primarily, the rules set out by IUPAC, the International Union of Pure and Applied Chemistry. The official rules, right? The idea is that every compound should have a unique, unambiguous name that you can deduce directly from its structure, and you should be able to draw the structure if you're given the systematic name. These names describe the hydrocarbon framework, identify the functional groups, and use numbers to say exactly where everything is attached. Like propan-1-ol tells you it's a three-carbon chain, propan-, it's an alcohol, -ol, and the OH group is on carbon number one. Okay, logical but maybe a bit complex. Sometimes they can get very complex. For simple benzene rings with two things attached, though, chemists often use an older but still systematic and often easier system. Ortho, meta, and para. Ortho means the groups are neighbors on the ring. Positions one and two. Meta means there's one carbon in between, one and three. Para means they're directly opposite each other, one and four. Ortho, meta, para, like neighbors, one house down, across the street. That's a great way to think of it. Much easier than saying 1,2-dimethylbenzene when you can just say ortho-xylene. Okay, but here's what I really want to know. What do chemists actually call compounds day-to-day in the lab or when they're talking to each other? Because those long IUPAC names. Uh-huh. Yeah, you're right. Full systematic names for anything moderately complex are just too clumsy for everyday conversation or even writing in the main text of a paper. So, what really happens? First and foremost, the supremacy of the structural diagram. A clear drawing is almost always better than any name. The best practice is always give a diagram alongside a name. Unless it's something incredibly simple and unambiguous like ethanol or acetone, draw first, name second. Diagrams rule. Okay. What about names then? For really common simple compounds, trivial names are absolutely standard out of habit.
Acetone, ethyl acetate, toluene, phenol, acetaldehyde, acetic acid, formic acid, benzene, pyridine, aniline. Chemists use these names constantly without thinking twice. The old familiar names stick around. They do. Then there are names for specific mechanistically important fragments. Knowing vinyl versus allyl is crucial. A vinyl group is attached directly to a double-bond carbon, C=C–X. An allyl group is attached to the carbon next to the double bond, C=C–CH2–X. They have totally different reactivity, which is key for understanding reaction pathways. Ah, subtle but important difference. Very important. Same with phenyl, Ph, versus benzyl, Bn. Phenyl means attached directly to the benzene ring. Benzyl means attached to a CH2 group which is then attached to the ring. Again, completely different reactivity. PhCl is very unreactive. BnCl is quite reactive. Got it. Ph direct, Bn one step away. Exactly. Now, for those really complex natural products, strychnine, penicillin, vitamin B12, palytoxin, nobody uses systematic names. They're impossibly long and unwieldy. They're always referred to by their common trivial names, usually alongside a diagram in publications. Same goes for amino acids, always alanine, leucine, etc. Never the systematic name. Okay, so common names for the really complex stuff, too. And then there are acronyms. Labs run on acronyms for common solvents and reagents. THF for tetrahydrofuran, DMF for dimethylformamide, DMSO for dimethyl sulfoxide, reagents like LDA, DIBAL, PCC, DEAD. You just learn them. It's essential shorthand. So when do chemists actually use those systematic IUPAC names? They're mostly used for moderately simple compounds, maybe with 5 to 20 carbons, chains or rings where there isn't a common trivial name. Something like nonanal, a nine-carbon aldehyde, or cycloocta-1,4-diene, an eight-membered ring with two double bonds. The names might be a bit long, but they precisely describe the structure, which is vital for clear communication when no simpler name exists. And for brand new compounds from research, for new compounds in the main text of a research paper, chemists will almost always give it a simple tag name like the yellow ketone or, more formally, compound 5 or amine 12b. And crucially, they'll show the structure right there. The full painstaking IUPAC name is usually buried in the experimental section at the end, more for the official record than for easy reading. That makes so much practical sense. It does. So, the take-home advice on naming: draw the structure first. Always learn the names of the functional groups cold. That tells you about reactivity. Know the common trivial names for the everyday simple molecules. Use tag names with diagrams for complex or new things in discussion. Understand the principles of IUPAC so you can decipher a name if you need to, especially for those medium-sized molecules. And maybe keep a little notebook for acronyms and structures you encounter. The absolute key is never just skip over a name without knowing the structure it represents. What an absolutely fantastic deep dive. We've really covered the foundations here. Drawing molecules economically and realistically, understanding the framework versus the all-important functional groups, and then navigating the, uh, sometimes tricky world of naming them. It truly feels like learning the grammar of chemistry, and getting this right, this structural literacy as you called it, seems absolutely essential for everything that comes next. 
Understanding mechanisms, predicting reactions, figuring out stereochemistry, even starting to think backwards with retrosynthesis. Absolutely. If we connect this to the bigger picture, understanding how molecules are built and named isn't just an identification exercise. It is the absolutely critical first step to predicting how they'll behave, how they'll react, transform, interact with other molecules, whether in a flask or in a living cell. This structural fluency is what empowers chemists to understand everything from, yes, the most potent toxins to the most beneficial life-saving drugs. It's the foundation for designing new materials, new catalysts, new medicines. It all starts with understanding the architecture. Well, continue to dive deep with us as we keep unraveling the intricate world of chemistry, one concept at a time. Thinking about all this, what stands out to you about how chemists communicate these incredibly complex ideas? Keep those thoughts and questions coming. And as always, thank you for being part of the Last Minute Lecture.
189342
https://msp.org/ant/2023/17-10/ant-v17-n10-p05-p.pdf
Algebra & Number Theory, Volume 17 (2023), No. 10

Differences between perfect powers: prime power gaps
Michael A. Bennett and Samir Siksek

We develop machinery to explicitly determine, in many instances, when the difference $x^2 - y^n$ is divisible only by powers of a given fixed prime. This combines a wide variety of techniques from Diophantine approximation (bounds for linear forms in logarithms, both archimedean and nonarchimedean, lattice basis reduction, methods for solving Thue–Mahler and S-unit equations, and the primitive divisor theorem of Bilu, Hanrot and Voutier) and classical algebraic number theory, with results derived from the modularity of Galois representations attached to Frey–Hellegouarch elliptic curves. By way of example, we completely solve the equation $x^2 + q^\alpha = y^n$, where $2 \le q < 100$ is prime, and $x$, $y$, $\alpha$ and $n$ are integers with $n \ge 3$ and $\gcd(x, y) = 1$.

Contents:
1. Introduction (1790)
2. Reduction to S-integral points on elliptic curves for $n \in \{3, 4\}$ (1794)
3. An elementary approach to $x^2 - q^{2k} = y^n$ with $y$ odd (1796)
4. Lucas sequences and the primitive divisor theorem (1798)
5. The equation $x^2 + q^{2k} = y^n$: the proof of Theorem 4 (1799)
6. The equation $x^2 - q^{2k} = y^n$ with $y$ even: reduction to Thue–Mahler equations (1802)
7. The modular approach to Diophantine equations: some background (1804)
8. The equation $x^2 - q^{2k} = y^n$ with $y$ even: an approach via Frey curves (1805)
9. The equation $x^2 - q^{2k} = y^n$: an upper bound for the exponent $n$ (1809)
10. The equation $x^2 - q^{2k} = y^n$: proof of Theorem 5 (1813)
11. The equation $x^2 + q^{2k+1} = y^n$ with $y$ odd (1815)
12. The equation $x^2 + (-1)^\delta q^{2k+1} = y^5$ (1818)
13. Frey–Hellegouarch curves for a ternary equation of signature $(n, n, 2)$ (1820)
14. The equation $x^2 \pm q^{2k+1} = y^n$ and proofs of Theorems 2 and 3 (1823)
15. The proof of Theorem 1: large exponents (1833)
16. Concluding remarks (1844)
Acknowledgments (1844)
References (1844)

Bennett is supported by NSERC. Siksek is supported by an EPSRC Grant EP/S031537/1 “Moduli of elliptic curves and classical Diophantine problems”.
MSC2020: primary 11D61; secondary 11D41, 11F80.
Keywords: exponential equation, Lucas sequence, shifted power, Galois representation, Frey curve, modularity, level lowering, Baker’s bounds, Hilbert modular forms, Thue–Mahler equations.
© 2023 MSP (Mathematical Sciences Publishers). Distributed under the Creative Commons Attribution License 4.0 (CC BY). Open Access made possible by subscribing institutions via Subscribe to Open.

1. Introduction

The Lebesgue–Nagell equation
$$x^2 + D = y^n \qquad (1)$$
has a very extensive literature, motivated, at least in part, by attempts to extend Mihăilescu’s theorem (Catalan’s conjecture) to larger gaps in the sequence of perfect powers, in an attempt to attack Pillai’s conjecture. In (1), we will suppose that $x$ and $y$ are coprime nonzero integers, and that the prime divisors of $D$ belong to a fixed, finite set of primes $S$. Under these assumptions, bounds for linear forms in logarithms, $p$-adic and complex, imply that the set of integer solutions $(x, y, n)$ to (1), with $|y| > 1$ and $n \ge 3$, is finite and effectively determinable. If, in addition, we suppose that $D$ is positive and that $y$ is odd, then these solutions may be explicitly determined, provided $|S|$ is not too large, through appeal to the primitive divisor theorem of Bilu, Hanrot and Voutier [Bilu et al. 2001], in conjunction with techniques from Diophantine approximation. 
If either $D > 0$ and $y$ is even, or if $D < 0$, the primitive divisor theorem cannot be applied to solve (1) and we must work rather harder, appealing to either bounds for linear forms in logarithms or to results based upon the modularity of Galois representations associated to certain Frey–Hellegouarch elliptic curves. In a companion paper [Bennett and Siksek 2023], we develop machinery for handling (1) in the first difficult case where $D > 0$ and $y$ is even. Though the techniques we discuss in the present paper are more widely applicable, we will, for simplicity, restrict attention to the case where $D$ in (1) is divisible by a single prime $q$, whilst treating both the cases $D < 0$ and $D > 0$. That is, we will concern ourselves primarily with the equation
$$x^2 + (-1)^\delta q^\alpha = y^n, \quad q \nmid x, \qquad (2)$$
where $\delta \in \{0, 1\}$ and $\alpha$ is a nonnegative integer. In the case $\delta = 0$, our main result is the following.

Theorem 1. If $x$, $y$, $q$, $\alpha$ and $n$ are positive integers with $q$ prime, $2 \le q < 100$, $q \nmid x$, $n \ge 3$ and
$$x^2 + q^\alpha = y^n, \qquad (3)$$
then $(q, \alpha, y, n)$ is one of (2, 1, 3, 3), (2, 2, 5, 3), (2, 5, 3, 4), (3, 5, 7, 3), (3, 4, 13, 3), (7, 1, 2, 3), (7, 3, 8, 3), (7, 1, 32, 3), (7, 2, 65, 3), (7, 1, 2, 4), (7, 2, 5, 4), (7, 1, 2, 5), (7, 1, 8, 5), (7, 1, 2, 7), (7, 3, 2, 9), (7, 1, 2, 15), (11, 1, 3, 3), (11, 1, 15, 3), (11, 2, 5, 3), (11, 3, 443, 3), (13, 1, 17, 3), (17, 1, 3, 4), (19, 1, 7, 3), (19, 1, 55, 5), (23, 1, 3, 3), (23, 3, 71, 3), (23, 3, 78, 4), (23, 1, 2, 5), (23, 1, 2, 11), (29, 2, 5, 7), (31, 1, 4, 4), (31, 1, 2, 5), (31, 1, 2, 8), (41, 2, 29, 4), (41, 2, 5, 5), (47, 1, 6, 3), (47, 1, 12, 3), (47, 1, 63, 3), (47, 2, 17, 3), (47, 3, 74, 3), (47, 1, 3, 5), (47, 1, 2, 7), (53, 1, 9, 3), (53, 1, 29, 3), (53, 1, 3, 6), (61, 1, 5, 3), (67, 1, 23, 3), (71, 1, 8, 3), (71, 1, 6, 4), (71, 1, 3, 7), (71, 1, 2, 9), (79, 1, 20, 3), (79, 1, 2, 7), (83, 1, 27, 3), (83, 1, 3, 9), (89, 1, 5, 3), (97, 2, 12545, 3), (97, 1, 7, 4).

One might note that the restriction $q \nmid x$ can be removed, with a modicum of effort, at least for certain values of $q$. The cases where primitive divisor arguments are inapplicable correspond to $q \in \{7, 23, 31, 47, 71, 79\}$ and $y$ even (and this is where the great majority of work lies in proving Theorem 1). If $q = 7$, Theorem 1 generalizes recent work of Koutsianas, who established a similar result under certain conditions upon $\alpha$ and $q$, and, in particular, showed that (3) has no solutions with $q = 7$ and prime $n \equiv 13, 23 \pmod{24}$. We note that the solution(s) with $q = 83$ were omitted in the statement of Theorem 1 of Berczes and Pink. Our results for (2) with $\delta = 1$ are less complete, at least when $\alpha$ is odd.

Theorem 2. Suppose that
$$q \in \{7, 11, 13, 19, 23, 29, 31, 43, 47, 53, 59, 61, 67, 71, 79, 83\}. \qquad (4)$$
If $x$ and $n$ are positive integers, $q \nmid x$, $n \ge 3$ and
$$x^2 - q^{2k+1} = y^n, \qquad (5)$$
where $y$ and $k$ are integers, then $(q, k, y, n)$ is one of (7, 2, 393, 3), (7, 1, −3, 5), (11, 0, 5, 5), (11, 1, 37, 3), (13, 0, 3, 5), (19, 0, 5, 3), (19, 2, −127, 3), (19, 0, −3, 4), (19, 0, 3, 4), (23, 1, 1177, 3), (31, 0, −3, 3), (43, 0, −3, 3), (71, 0, 5, 3), (71, 1, −23, 3), (79, 0, 45, 3).

To the best of our knowledge, these are the first examples of primes $q$ for which (5) has been completely solved (though the cases with $k = 0$ are treated in the thesis of Barros). There are eight other primes in the range $3 \le q < 100$ for which we are unable to give a similarly satisfactory statement. For four of these, namely $q = 3, 5, 17$ and $37$, the equation (5) has a solution with $y = \pm 1$. 
Our results for (2) with $\delta = 1$ are less complete, at least when $\alpha$ is odd.

Theorem 2. Suppose that
$$q \in \{7, 11, 13, 19, 23, 29, 31, 43, 47, 53, 59, 61, 67, 71, 79, 83\}. \qquad (4)$$
If $x$ and $n$ are positive integers, $q \nmid x$, $n \ge 3$ and
$$x^2 - q^{2k+1} = y^n, \qquad (5)$$
where $y$ and $k$ are integers, then $(q, k, y, n)$ is one of
(7, 2, 393, 3), (7, 1, −3, 5), (11, 0, 5, 5), (11, 1, 37, 3), (13, 0, 3, 5), (19, 0, 5, 3), (19, 2, −127, 3), (19, 0, −3, 4), (19, 0, 3, 4), (23, 1, 1177, 3), (31, 0, −3, 3), (43, 0, −3, 3), (71, 0, 5, 3), (71, 1, −23, 3), (79, 0, 45, 3).

To the best of our knowledge, these are the first examples of primes $q$ for which (5) has been completely solved (though the cases with $k = 0$ are treated in the thesis of Barros). There are eight other primes in the range $3 \le q < 100$ for which we are unable to give a similarly satisfactory statement. For four of these, namely $q = 3, 5, 17$ and $37$, equation (5) has a solution with $y = \pm 1$. For such primes we are unaware of any results that would enable us to completely treat fixed exponents $n$ of moderate size; this difficulty is well known for the $D = -2$ case of (1). One should note that it is relatively easy to solve (5) for $q \in \{3, 5, 37\}$ under the additional assumption that $y$ is even (and somewhat harder if $q = 17$ and $y$ is even). For the other four primes, namely $q = 41, 73, 89$ and $97$, we give a method which appears theoretically capable of success, but is, alas, prohibitively expensive computationally. We content ourselves with proving the following modest result for these primes.

Theorem 3. Let $q \in \{41, 73, 89, 97\}$. The only solutions to (5) with $q \nmid x$ and $3 \le n \le 1000$ are with $(q, k, y, n)$ equal to one of
(41, 0, −2, 5), (41, 0, 2, 3), (41, 0, 2, 7), (41, 1, 10, 5), (73, 0, −6, 4), (73, 0, −4, 3), (73, 0, 2, 3), (73, 0, 3, 3), (73, 0, 6, 3), (73, 0, 6, 4), (73, 0, 72, 3), (89, 0, −4, 3), (89, 0, −2, 3), (89, 0, 2, 5), (89, 0, 2, 13), (97, 0, 2, 7).
There are no solutions to (5) with $n > 1000$, $q \nmid x$, and either $q = 73$ and $y \equiv 0 \pmod 2$, or $q = 97$ and $y \equiv 1 \pmod 2$.

The additional assumption that the exponent of our prime $q$ is even simplifies matters considerably. In the case of (3), Bérczes and Pink deduced Theorem 1 for even values of $\alpha$ (whence primitive divisor technology works efficiently). For completeness, we extend this to $q < 1000$; the results for $q < 100$ are, of course, just a special case of Theorem 1.

Theorem 4. If $x, y, q, k$ and $n$ are positive integers with $q$ prime, $2 \le q < 1000$, $q \nmid x$, $n \ge 3$ and
$$x^2 + q^{2k} = y^n, \qquad (6)$$
then $(q, k, y, n)$ is one of
(2, 1, 5, 3), (3, 2, 13, 3), (7, 1, 65, 3), (7, 1, 5, 4), (11, 1, 5, 3), (29, 1, 5, 7), (41, 1, 29, 4), (41, 1, 5, 5), (47, 1, 17, 3), (97, 1, 12545, 3), (107, 1, 37, 3), (191, 1, 65, 3), (239, 1, 169, 4), (239, 1, 13, 8), (431, 1, 145, 3), (587, 1, 197, 3), (971, 1, 325, 3).

More interesting for us is the case where the difference $x^2 - y^n$ is positive (so that primitive divisor arguments are inapplicable and there are no prior results available in the literature). We prove the following.

Theorem 5. If $x, q, k$ and $n$ are positive integers with $q$ prime, $2 \le q < 1000$, $q \nmid x$, $n \ge 3$ and
$$x^2 - q^{2k} = y^n, \qquad (7)$$
where $y$ is an integer, then $(q, k, y, n)$ is one of
(3, 1, −2, 3), (3, 1, 40, 3), (3, 1, ±2, 4), (3, 2, −2, 5), (5, 2, 6, 3), (7, 2, 15, 3), (7, 1, 2, 5), (11, 1, 12, 3), (11, 2, 3, 5), (13, 1, 3, 3), (13, 1, 12, 5), (17, 1, −4, 3), (17, 1, ±12, 4), (17, 2, 42, 3), (29, 1, −6, 3), (31, 1, 2, 7), (43, 1, −12, 3), (43, 1, 126, 3), (43, 4, 96222, 3), (47, 1, 6300, 3), (53, 1, 6, 3), (71, 1, 30, 3), (71, 2, −136, 3), (89, 1, 84, 3), (97, 2, 3135, 3), (101, 1, 24, 3), (109, 1, 20, 3), (109, 1, 35, 3), (109, 1, 570, 3), (127, 1, −10, 3), (127, 1, 8, 3), (127, 1, 198, 3), (127, 1, 2, 9), (179, 1, −30, 3), (193, 1, 63, 3), (197, 1, 260, 3), (223, 1, 30, 3), (251, 1, −10, 3), (251, 1, −6, 5), (257, 1, −4, 5), (263, 1, 2418, 3), (277, 1, −30, 3), (307, 1, 60, 3), (307, 1, 176, 3), (307, 2, 2262, 3), (359, 1, −28, 3), (383, 2, 25800, 3), (397, 1, −42, 3), (431, 1, 12, 3), (433, 1, −12, 3), (433, 1, 143, 3), (433, 2, 26462, 3), (479, 1, 90, 3), (499, 1, −12, 5), (503, 1, 828, 3), (557, 1, −60, 3), (577, 1, ±408, 4), (593, 1, −70, 3), (601, 1, 72, 3), (659, 1, 42, 3), (683, 1, 193346, 3), (701, 1, 4452, 3), (727, 1, 18, 3), (739, 1, 234, 3), (769, 1, 255, 3), (811, 1, −70, 3), (857, 1, −72, 3), (997, 1, 48, 3).
We note that, with sufficient computational power, there is no obstruction to extending the results of Theorems 4 and 5 to larger prime values $q$. Without fundamentally new ideas, it is not clear that the same may be said of, for example, Theorem 1. In this case, the bounds we obtain upon the exponent $n$ via linear forms in logarithms, even for relatively small $q$, leave us with a computation which, while finite, is barely tractable.

In the remainder of the paper, we will concern ourselves with the equation
$$x^2 + (-1)^\delta q^\alpha = y^n, \qquad \gcd(x, y) = 1, \quad \alpha > 0, \qquad (8)$$
where $q$ is prime, $n \ge 3$ and $\delta \in \{0, 1\}$. Equation (8) has been completely resolved [Ivorra 2003; Siksek 2003] for $q = 2$, except for the case $(\alpha, \delta) = (1, 1)$, which corresponds to $D = -2$ in (1); the solutions for $q = 2$ in our theorems are included for completeness. For the remainder of the paper, we suppose that $q$ is a fixed odd prime. Our proofs will use a broad combination of techniques, which include
• lower bounds for linear forms in complex and $p$-adic logarithms, which yield bounds for the exponent $n$ in (8);
• Frey–Hellegouarch curves and their Galois representations, which provide a wealth of local information regarding solutions to (8);
• the celebrated primitive divisor theorem of Bilu, Hanrot and Voutier, which can be used to treat most cases of (8) when $y$ is odd and $\delta = 0$;
• elementary descent arguments that reduce (8), for a fixed exponent $n$, to Thue–Mahler equations, which can be resolved thanks to the Thue–Mahler solver associated to [Gherga and Siksek 2022].

The outline of this paper is as follows. In Section 2, we solve the equation $x^2 + (-1)^\delta q^\alpha = y^n$ for $n \in \{3, 4\}$ and $3 \le q < 100$ by reducing the problem to the determination of $S$-integral points on elliptic curves. In Section 3, we solve the equation $x^2 - q^{2k} = y^n$, for $q$ in the range $3 \le q < 1000$, with $y$ odd, using an elementary sieving argument; this completes the proof of Theorem 5 in the case $y$ is odd. Next, Section 4 provides a short overview of Lucas sequences, their ranks of apparition, and the primitive divisor theorem of Bilu, Hanrot and Voutier. We make use of this machinery in Section 5 to solve the equation $x^2 + q^{2k} = y^n$ for $q$ in the range $3 \le q < 1000$, thereby proving Theorem 4. Section 6 reduces the equation $x^2 - q^{2k} = y^n$, for even values of $y$, to Thue–Mahler equations of the form
$$y_1^n - 2^{n-2} y_2^n = q^k. \qquad (9)$$
In Section 7, we give a brief outline of the modular approach to Diophantine equations. Section 8 applies this modular approach, particularly the $(n, n, n)$ Frey–Hellegouarch elliptic curves of Kraus, to (9); this allows us to deduce that there are no solutions for $3 \le q < 1000$, except possibly for $q \in \{31, 127, 257\}$, where the mod $n$ representation of the Frey–Hellegouarch curve arises from that of an elliptic curve with full 2-torsion and conductor $2q$. Before we can complete the proof of Theorem 5, we need an upper bound for the exponent $n$; in Section 9, we give a sharpening of Bugeaud's bound for the equation $x^2 - q^{2k} = y^n$, which uses (9) and the theory of linear forms in real and $p$-adic logarithms. In Section 10, we complete the proof of Theorem 5; our approach makes use of a sieving technique that builds on the information obtained from the modular approach in Section 8 and the upper bound for $n$ established in Section 9. The remainder of the paper is concerned with (8) where $\alpha = 2k + 1$, and with $3 \le q < 100$.
In Section 11, we solve $x^2 + q^{2k+1} = y^n$ with $y$ odd with the help of the primitive divisor theorem, and in Section 12 we solve $x^2 - q^{2k+1} = y^5$ by reducing to Thue–Mahler equations. It remains, then, to handle the equations $x^2 - q^{2k+1} = y^n$ and $x^2 + q^{2k+1} = y^n$ where, in the latter case, we may additionally assume that $y$ is even. In Section 13, we study the more general equation
$$y^n + q^\alpha z^n = x^2, \qquad \gcd(x, y) = 1, \qquad (10)$$
where $q$ is prime, using Galois representations of Frey–Hellegouarch curves. Our approach builds on previous work of Bennett and Skinner, and also on the work of Ivorra and Kraus. We then restrict ourselves in Section 14 to the case $z = \pm 1$ and $\alpha$ odd in (10). In this section, we develop a variety of sieves, based upon local information coming from the Frey–Hellegouarch curves, that allow us, in many situations, to eliminate values of $q$ from consideration completely and, in the more difficult cases, to solve (8) for a fixed pair $(q, n)$. In particular, we employ this strategy to complete the proofs of Theorems 2 and 3. Finally, in Section 15, we return to bounds for linear forms in $p$-adic and complex logarithms to derive explicit upper bounds upon $n$ in (8), and then report upon a (somewhat substantial) computation that uses the arguments of Section 14 to solve (8) for all remaining pairs $(q, n)$ required to finish the proof of Theorem 1.

2. Reduction to $S$-integral points on elliptic curves for $n \in \{3, 4\}$

In the following sections, it will be of value to us to assume that the exponent $n$ in (8) is not too small. This is primarily to ensure that the Frey–Hellegouarch curve we attach to a putative solution has a corresponding mod $n$ Galois representation that is irreducible. For suitably large prime values of $n$ (typically, $n \ge 7$), the desired irreducibility follows from Mazur's isogeny theorem. In Section 4, such an assumption allows us to (mostly) ignore so-called defective Lucas sequences. In this section, we treat separately the cases $n = 3$ and $n = 4$ for $q < 100$, where the problem of solving (8) reduces immediately to one of determining $S$-integral points on specific models of genus one curves; here $S = \{q\}$. This approach falters for many values of $q$ in the range $100 < q < 1000$, as we are often unable to compute the Mordell–Weil groups of the relevant elliptic curves. Thus, for the proofs of Theorems 4 and 5 for the exponents $n = 3$ and $n = 4$, where we treat values of $q$ less than 1000, we shall employ different techniques, including sieving arguments and reduction to Thue–Mahler equations.

The case $n = 3$. Supposing that we have a coprime solution to (8) with $n = 3$, we can write $\alpha = 6b + c$, where $0 \le c \le 5$. Taking $X = y/q^{2b}$ and $Y = x/q^{3b}$, it follows that $(X, Y)$ is an $S$-integral point on the elliptic curve
$$Y^2 = X^3 + (-1)^{\delta+1} q^c, \qquad (11)$$
where $S = \{q\}$.
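To make the change of variables concrete, here is a small Python sketch of ours (purely illustrative) that maps the solution $30042907^2 - 43^8 = 96222^3$ of (8) to a $\{43\}$-integral point on (11):

```python
from fractions import Fraction

# Data for the solution 30042907^2 - 43^8 = 96222^3 of (8),
# i.e. (q, delta, alpha) = (43, 1, 8).
q, delta, alpha, x, y = 43, 1, 8, 30042907, 96222

b, c = divmod(alpha, 6)                 # alpha = 6b + c with 0 <= c <= 5
X = Fraction(y, q ** (2 * b))           # S-integral: denominators are powers of q
Y = Fraction(x, q ** (3 * b))
assert Y**2 == X**3 + (-1) ** (delta + 1) * q**c
print(X, Y)                             # a point on Y^2 = X^3 + 43^2
```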
Here, for a particular choice of $\delta \in \{0, 1\}$ and prime $q$, we may use the standard method for computing $S$-integral points on elliptic curves based on lower bounds for linear forms in elliptic logarithms (e.g., [Pethő et al. 1999]). We made use of the built-in Magma [Bosma et al. 1997] implementation of this method to compute these $S$-integral points on (11) for $\delta \in \{0, 1\}$ and $2 \le q < 100$. We obtained a total of 83 solutions to (8) for these values of $q$ with $\alpha > 0$, $x > 0$, $y \ne 0$ and $\gcd(x, y) = 1$. These are given in Table 1.

q   δ  α  x         y
2   0  1  5         3
2   0  2  11        5
2   1  1  1         −1
2   1  7  71        17
2   1  9  13        −7
2   1  3  3         1
3   0  4  46        13
3   0  5  10        7
3   1  1  2         1
3   1  2  1         −2
3   1  2  253       40
5   1  1  2         −1
5   1  4  29        6
7   0  1  1         2
7   0  1  181       32
7   0  2  524       65
7   0  3  13        8
7   1  4  76        15
7   1  5  7792      393
11  0  1  4         3
11  0  1  58        15
11  0  2  2         5
11  0  3  9324      443
11  1  2  43        12
11  1  3  228       37
13  0  1  70        17
13  1  2  14        3
17  1  1  3         −2
17  1  1  4         −1
17  1  1  5         2
17  1  1  9         4
17  1  1  23        8
17  1  1  282       43
17  1  1  375       52
17  1  7  21063928  76271
17  1  1  378661    5234
17  1  2  15        −4
17  1  4  397       42
19  0  1  18        7
19  1  1  12        5
19  1  5  654       −127
23  0  1  2         3
23  0  3  588       71
23  1  3  40380     1177
29  1  2  25        −6
31  1  1  2         −3
37  1  1  6         −1
37  1  1  8         3
37  1  1  3788      243
37  1  3  228       11
41  1  1  7         2
43  1  1  4         −3
43  1  2  11        −12
43  1  8  30042907  96222
43  1  2  1415      126
47  0  1  13        6
47  0  1  41        12
47  0  1  500       63
47  0  2  52        17
47  0  3  549       74
47  1  2  500047    6300
53  0  1  26        9
53  0  1  156       29
53  1  2  55        6
61  0  1  8         5
67  0  1  110       23
71  0  1  21        8
71  1  1  14        5
71  1  2  179       30
71  1  3  588       −23
71  1  4  4785      −136
73  1  1  3         −4
73  1  1  9         2
73  1  1  10        3
73  1  1  17        6
73  1  1  611       72
73  1  1  6717      356
79  0  1  89        20
79  1  1  302       45
83  0  1  140       27
89  0  1  6         5
89  1  1  5         −4
89  1  1  9         −2
89  1  1  33        10
89  1  1  408       55
89  1  2  775       84
97  0  2  1405096   12545
97  1  1  77        18
97  1  4  175784    3135

Table 1. Solutions to the equation $x^2 + (-1)^\delta q^\alpha = y^3$ for primes $2 \le q < 100$, $\delta \in \{0, 1\}$ and $x, y, \alpha$ integers satisfying $\alpha > 0$, $x > 0$, $y \ne 0$ and $\gcd(x, y) = 1$.

The case $n = 4$. Next, we consider the case $n = 4$. Write $\alpha = 4b + c$, where $0 \le c \le 3$. Let $X = (y/q^b)^2$ and $Y = xy/q^{3b}$. Then $(X, Y)$ is an $S$-integral point on the elliptic curve
$$Y^2 = X(X^2 + (-1)^{\delta+1} q^c), \qquad (12)$$
where $S = \{q\}$. We again appealed to the built-in Magma [Bosma et al. 1997] implementation of this method to compute the $S$-integral points on (12) for $\delta \in \{0, 1\}$ and $2 \le q < 100$. We obtained a total of 16 solutions to (8) for these values of $q$ with $\alpha > 0$, $x > 0$, $y > 0$ and $\gcd(x, y) = 1$. These are given in Table 2.

q   δ  α  x     y
2   0  5  7     3
2   1  3  3     1
3   1  1  2     1
3   1  5  122   11
3   1  2  5     2
7   0  1  3     2
7   0  2  24    5
17  0  1  8     3
17  1  2  145   12
19  1  1  10    3
23  0  3  6083  78
31  0  1  15    4
41  0  2  840   29
71  0  1  35    6
73  1  1  37    6
97  0  1  48    7

Table 2. Solutions to the equation $x^2 + (-1)^\delta q^\alpha = y^4$ for primes $2 \le q < 100$, $\delta \in \{0, 1\}$ and $x, y, \alpha$ integers satisfying $\alpha > 0$, $x > 0$, $y > 0$ and $\gcd(x, y) = 1$.
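Both tables are straightforward to spot-check; the following Python sketch (ours, not part of the paper's Magma computations) verifies a few rows of each:

```python
from math import gcd

# Spot-check of Table 1 (n = 3) and Table 2 (n = 4): each row (q, delta,
# alpha, x, y) should satisfy x^2 + (-1)^delta * q^alpha = y^n, gcd(x,y)=1.
rows_n3 = [(2, 0, 1, 5, 3), (7, 1, 5, 7792, 393),
           (43, 1, 8, 30042907, 96222), (97, 0, 2, 1405096, 12545)]
rows_n4 = [(2, 0, 5, 7, 3), (23, 0, 3, 6083, 78), (41, 0, 2, 840, 29)]

for n, rows in [(3, rows_n3), (4, rows_n4)]:
    for q, delta, alpha, x, y in rows:
        assert x**2 + (-1)**delta * q**alpha == y**n and gcd(x, y) == 1
print("all sampled rows check out")
```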
3. An elementary approach to $x^2 - q^{2k} = y^n$ with $y$ odd

In this section, we apply an elementary factorization argument to prove Theorem 5 for $y$ odd. In other words, we consider the equation
$$x^2 - q^{2k} = y^n, \quad x, k, n \text{ positive integers}, \quad n \ge 3, \quad \gcd(x, y) = 1, \quad y \text{ an odd integer}. \qquad (13)$$
Here $q \ge 3$ is a prime. From this, we immediately see that
$$x + q^k = y_1^n \quad \text{and} \quad x - q^k = y_2^n, \qquad (14)$$
with $y = y_1 y_2$, so that we have
$$y_1^n - y_2^n = 2 q^k. \qquad (15)$$
If $2 \mid n$, then $y_1^n \equiv y_2^n \equiv 1 \pmod 4$, a contradiction. We may suppose henceforth, without loss of generality, that $n$ is an odd prime. Observe that
$$(y_1 - y_2)\left(y_1^{n-1} + y_1^{n-2} y_2 + \cdots + y_2^{n-1}\right) = y_1^n - y_2^n = 2 q^k. \qquad (16)$$
Clearly $y_1 > y_2$ and, as they are both odd, $y_1 - y_2 \ge 2$ and $2 \mid (y_1 - y_2)$. Write
$$d = \gcd\!\left(y_1 - y_2,\; y_1^{n-1} + y_1^{n-2} y_2 + \cdots + y_2^{n-1}\right),$$
so that $y_2 \equiv y_1 \pmod d$ and
$$0 \equiv y_1^{n-1} + y_1^{n-2} y_2 + \cdots + y_2^{n-1} \equiv n\, y_1^{n-1} \pmod d.$$
Similarly, we have $n\, y_2^{n-1} \equiv 0 \pmod d$, and so $d \in \{1, n\}$.

We first deal with the case $d = n$, whereby, from (16), $q = n$. Let $r = \operatorname{ord}_n(y_1 - y_2) \ge 1$ and write $y_1 = y_2 + n^r \kappa$, where $n \nmid \kappa$. Then
$$\operatorname{ord}_n\!\left(y_1^{n-1} + y_1^{n-2} y_2 + \cdots + y_2^{n-1}\right) = \operatorname{ord}_n\!\left(\frac{(y_2 + n^r \kappa)^n - y_2^n}{n^r \kappa}\right) = 1.$$
Hence
$$y_1 - y_2 = 2 n^{k-1} \quad \text{and} \quad y_1^{n-1} + y_1^{n-2} y_2 + \cdots + y_2^{n-1} = n, \qquad (17)$$
and so
$$n = \prod_{i=1}^{n-1} |y_1 - \zeta_n^i y_2| \ge \left(|y_1| - |y_2|\right)^{n-1}.$$
Recall that $y_1$ and $y_2$ are both odd. If $y_2 \ne \pm y_1$, then the right-hand side of this last inequality is at least $2^{n-1}$, which is impossible. Thus $y_2 = \pm y_1$, so that, from (17), $y_1^{n-1} \mid n$. It follows that $|y_1| = |y_2| = 1$, contradicting (14).

Thus $d = 1$, whence
$$y_1 - y_2 = 2 \quad \text{and} \quad y_1^{n-1} + y_1^{n-2} y_2 + \cdots + y_2^{n-1} = q^k. \qquad (18)$$
Since the polynomial $X^{n-1} + X^{n-2} + \cdots + 1$ has a root modulo $q$, the Dedekind–Kummer theorem tells us that $q$ splits in $\mathbb{Z}[\zeta_n]$, and so $q \equiv 1 \pmod n$. We therefore have the following.

Proposition 3.1. If $x, y, q, k$ and $n$ are positive integers satisfying (13) with $n$ and $q$ prime, then $n \mid (q - 1)$ and there exists an odd positive integer $X$ such that
$$y = X(X + 2) \quad \text{and} \quad (X + 2)^n - X^n = 2 q^k. \qquad (19)$$

This last result makes it an extremely straightforward matter to solve (7) in the case $y$ is odd.

Lemma 3.2. The only solutions to (13) with $3 \le q < 1000$ prime correspond to the identities
$$76^2 - 7^4 = 15^3, \quad 122^2 - 11^4 = 3^5, \quad 14^2 - 13^2 = 3^3, \quad 175784^2 - 97^4 = 3135^3,$$
$$234^2 - 109^2 = 35^3, \quad 536^2 - 193^2 = 63^3, \quad 1764^2 - 433^2 = 143^3, \quad 4144^2 - 769^2 = 255^3.$$

Proof. Suppose first that $n = 3$, where (19) becomes
$$3(X + 1)^2 + 1 = q^k. \qquad (20)$$
From [Cohn 1997; 2003], we know that the equation $3u^2 + 1 = y^m$ has no solutions with $m \ge 3$. We conclude that $k = 1$ or $2$. Solving (20) with $k = 1$ or $2$ and $3 \le q < 1000$ leads to the seven solutions with $n = 3$.

We now suppose that $n \ge 5$ is prime. By a theorem of Bennett and Skinner [2004, Theorem 2], the only solutions to the equation $X^n + Y^n = 2Z^2$ with $n \ge 5$ prime and $\gcd(X, Y) = 1$ are with either $|XY| = 1$ or $(n, X, Y, Z) = (5, 3, -1, \pm 11)$. We note that if $k$ is even, then (19) can be rewritten as $(X + 2)^n - X^n = 2(q^{k/2})^2$, and therefore $n = 5$, $X = 1$ and $q^{k/2} = 11$. This yields the solution $122^2 - 11^4 = 3^5$.

We may therefore suppose that $k$ is odd. Recalling that $n \mid (q - 1)$ leaves us with precisely 191 pairs $(q, n)$ to consider, ranging from $(11, 5)$ to $(997, 83)$. Fix one of these pairs $(q, n)$ and let $\ell \nmid nq$ be an odd prime. Let $Z_\ell$ be the set of $\beta \in \mathbb{Z}/(\ell-1)\mathbb{Z}$ such that $\beta$ is odd and the polynomial $(X + 2)^n - X^n - 2q^\beta$ has a root in $\mathbb{F}_\ell$. We note that the value of $q^k$ modulo $\ell$ depends only on the residue class of $k$ modulo $\ell - 1$. From (19), we deduce that $(k \bmod (\ell - 1)) \in Z_\ell$. Now let $\ell_1, \ell_2, \ldots, \ell_m$ be a collection of odd primes with $\ell_i \nmid nq$ for $1 \le i \le m$. Let
$$M = \operatorname{lcm}(\ell_1 - 1, \ell_2 - 1, \ldots, \ell_m - 1) \qquad (21)$$
and set
$$Z_{\ell_1,\ldots,\ell_m} = \{\beta \in \mathbb{Z}/M\mathbb{Z} : (\beta \bmod (\ell_i - 1)) \in Z_{\ell_i} \text{ for } i = 1, \ldots, m\}. \qquad (22)$$
It is clear that $(k \bmod M) \in Z_{\ell_1,\ldots,\ell_m}$. We wrote a short Magma script which, for each pair $(q, n)$, computed $Z_{\ell_1,\ldots,\ell_m}$, where $\ell_1, \ell_2, \ldots, \ell_m$ are the odd primes $\le 101$ distinct from $n$ and $q$. In all 191 cases, we found that $Z_{\ell_1,\ldots,\ell_m} = \varnothing$, giving the desired contradiction. □
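The sieve is easy to reproduce. The sketch below is ours, in Python rather than the authors' Magma, and handles a single pair $(q, n)$; it progressively intersects the congruence conditions, which in practice collapse quickly (for $(q, n) = (11, 5)$, already $\ell = 3$ suffices):

```python
from math import lcm
from sympy import primerange

def Z_ell(q, n, ell):
    """Odd classes beta mod (ell-1) for which (X+2)^n - X^n - 2*q^beta
    has a root X in F_ell (cf. the definition preceding (21))."""
    vals = {(pow(x + 2, n, ell) - pow(x, n, ell)) % ell for x in range(ell)}
    return {b for b in range(1, ell - 1, 2)
            if (2 * pow(q, b, ell)) % ell in vals}

def sieve(q, n, bound=101):
    m, classes = 1, {0}            # surviving residues of k modulo m
    for ell in primerange(3, bound + 1):
        if ell in (q, n):
            continue
        Zl, new_m = Z_ell(q, n, ell), lcm(m, ell - 1)
        classes = {c + m * t for c in classes for t in range(new_m // m)
                   if (c + m * t) % (ell - 1) in Zl}
        m = new_m
        if not classes:
            return True            # no admissible k: (19) is insoluble
    return False

print(sieve(11, 5))   # True, as the paper reports for all 191 pairs
```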
4. Lucas sequences and the primitive divisor theorem

The primitive divisor theorem of Bilu, Hanrot and Voutier [Bilu et al. 2001] shall be our main tool for treating (8) when $\delta = 0$ and $y$ is odd. In this section, we state this result and a related theorem of Carmichael that shall be useful later.

A pair of algebraic integers $(\gamma, \delta)$ is called a Lucas pair if $\gamma + \delta$ and $\gamma\delta$ are nonzero coprime rational integers, and $\gamma/\delta$ is not a root of unity. We say that two Lucas pairs $(\gamma_1, \delta_1)$ and $(\gamma_2, \delta_2)$ are equivalent if $\gamma_1/\gamma_2 = \pm 1$ and $\delta_1/\delta_2 = \pm 1$. Given a Lucas pair $(\gamma, \delta)$, we define the corresponding Lucas sequence by
$$L_m = \frac{\gamma^m - \delta^m}{\gamma - \delta}, \qquad m = 0, 1, 2, \ldots.$$
A prime $\ell$ is said to be a primitive divisor of the $m$-th term if $\ell$ divides $L_m$ but $\ell$ does not divide $(\gamma - \delta)^2 \cdot L_1 L_2 \cdots L_{m-1}$.

Theorem 6 [Bilu et al. 2001]. Let $(\gamma, \delta)$ be a Lucas pair and write $\{L_m\}$ for the corresponding Lucas sequence.
(i) If $m \ge 30$, then $L_m$ has a primitive divisor.
(ii) If $m \ge 11$ is prime, then $L_m$ has a primitive divisor.
(iii) $L_7$ has a primitive divisor unless $(\gamma, \delta)$ is equivalent to $\left((a - \sqrt{b})/2, (a + \sqrt{b})/2\right)$, where
$$(a, b) \in \{(1, -7), (1, -19)\}. \qquad (23)$$
(iv) $L_5$ has a primitive divisor unless $(\gamma, \delta)$ is equivalent to $\left((a - \sqrt{b})/2, (a + \sqrt{b})/2\right)$, where
$$(a, b) \in \{(1, 5), (1, -7), (2, -40), (1, -11), (1, -15), (12, -76), (12, -1364)\}. \qquad (24)$$

Let $\ell$ be a prime. We define the rank of apparition of $\ell$ in the Lucas sequence $\{L_m\}$ to be the smallest positive integer $m$ such that $\ell \mid L_m$, and denote it by $m_\ell$. The following theorem will be useful for us; a concise proof may be found in [Bennett et al. 2022, Theorem 8].

Theorem 7 [Carmichael 1913]. Let $(\gamma, \delta)$ be a Lucas pair, $\{L_m\}$ the corresponding Lucas sequence, and $\ell$ a prime.
(i) If $\ell \mid \gamma\delta$, then $\ell \nmid L_m$ for all positive integers $m$.
(ii) Suppose $\ell \nmid \gamma\delta$. Write $D = (\gamma - \delta)^2 \in \mathbb{Z}$.
  (a) If $\ell \ne 2$ and $\ell \mid D$, then $m_\ell = \ell$.
  (b) If $\ell \ne 2$ and $\left(\frac{D}{\ell}\right) = 1$, then $m_\ell \mid (\ell - 1)$.
  (c) If $\ell \ne 2$ and $\left(\frac{D}{\ell}\right) = -1$, then $m_\ell \mid (\ell + 1)$.
  (d) If $\ell = 2$, then $m_\ell = 2$ or $3$.
(iii) If $\ell \nmid \gamma\delta$, then $\ell \mid L_m \iff m_\ell \mid m$.
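For intuition, the sketch below (ours, illustrative only) computes a Lucas sequence from the integer data $(P, Q) = (\gamma + \delta, \gamma\delta)$ via the standard recurrence $L_m = P L_{m-1} - Q L_{m-2}$, and finds ranks of apparition, illustrating Theorem 7:

```python
def lucas_terms(P, Q, count):
    """L_0, L_1, ... for the Lucas pair with gamma+delta = P, gamma*delta = Q,
    via L_m = P*L_{m-1} - Q*L_{m-2} (L_0 = 0, L_1 = 1)."""
    L = [0, 1]
    for _ in range(count - 2):
        L.append(P * L[-1] - Q * L[-2])
    return L

def rank_of_apparition(P, Q, ell, bound=200):
    for m, Lm in enumerate(lucas_terms(P, Q, bound)):
        if m >= 1 and Lm % ell == 0:
            return m
    return None

# Fibonacci numbers: (P, Q) = (1, -1), D = (gamma - delta)^2 = 5.
# By Theorem 7(ii), m_ell divides ell - (5/ell): m_7 | 8 and m_11 | 10.
print(rank_of_apparition(1, -1, 7), rank_of_apparition(1, -1, 11))  # 8 10
```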
5. The equation $x^2 + q^{2k} = y^n$: the proof of Theorem 4

In this section, we prove Theorem 4 with the help of the primitive divisor theorem. We are concerned with the equation
$$x^2 + q^{2k} = y^n, \quad x, k, n \text{ positive integers}, \quad n \ge 3, \quad \gcd(x, y) = 1. \qquad (25)$$
Here $q \ge 3$ is a prime. Considering this equation modulo 8 immediately tells us that $y$ is odd and $x$ is even. Without loss of generality, we may suppose that $4 \mid n$ or that $n$ is divisible by an odd prime.

Lemma 5.1. Solutions to (25) with $4 \mid n$ and odd prime $q$ satisfy $k = 1$, $q^2 = 2y^{n/2} - 1$ and $x = (q^2 - 1)/2$. In particular, the only solutions to (25) with $4 \mid n$ and prime $3 \le q < 1000$ correspond to the identities
$$24^2 + 7^2 = 5^4, \qquad 840^2 + 41^2 = 29^4 \qquad \text{and} \qquad 28560^2 + 239^2 = 13^8 = 169^4.$$

Proof. Suppose that $4 \mid n$. Then $(y^{n/2} + x)(y^{n/2} - x) = q^{2k}$, and so $y^{n/2} + x = q^{2k}$ and $y^{n/2} - x = 1$. Thus $2y^{n/2} = q^{2k} + 1$. By Theorem 1 of [Bennett and Skinner 2004], the only solutions to the equation $A^r + B^r = 2C^2$ with $r \ge 4$, $ABC \ne 0$ and $\gcd(A, B) = 1$ are with $|AB| = 1$ or $(r, A, B, C) = (5, 3, -1, \pm 11)$. It follows that the equation $2y^{n/2} = q^{2k} + 1$ has no solutions with $k \ge 2$ and $4 \mid n$. Therefore $k = 1$, and hence $q^2 = 2y^{n/2} - 1$. The only primes in the range $3 \le q < 1000$ such that $q^2 = 2y^{n/2} - 1$ with $4 \mid n$ are $q = 7$, 41 and 239, which lead to the solutions stated in the lemma. □

Henceforth, we will suppose that $n$ is an odd prime. Thus $x + q^k i = \alpha^n$, where we can write $\alpha = a + bi$, for $a$ and $b$ coprime integers with $y = a^2 + b^2$. Subtracting this equation from its conjugate yields
$$q^k = b \cdot \frac{\alpha^n - \bar\alpha^n}{\alpha - \bar\alpha}. \qquad (26)$$

Lemma 5.2. Solutions to (25) with $n = 3$ and odd prime $q$ must satisfy
(i) either $q = 3$ and $(k, x, y) = (2, 46, 13)$;
(ii) or $q = 3a^2 - 1$ for some positive integer $a$ and $(k, x, y) = (1, a^3 - 3a, a^2 + 1)$;
(iii) or $q^2 = 3a^2 + 1$ for some positive integer $a$ and $(k, x, y) = (1, 8a^3 + 3a, 4a^2 + 1)$.
In particular, the only solutions to (25) with $n = 3$ and prime $3 \le q < 1000$ correspond to the identities
$$46^2 + 3^4 = 13^3, \quad 524^2 + 7^2 = 65^3, \quad 2^2 + 11^2 = 5^3, \quad 52^2 + 47^2 = 17^3, \quad 1405096^2 + 97^2 = 12545^3,$$
$$198^2 + 107^2 = 37^3, \quad 488^2 + 191^2 = 65^3, \quad 1692^2 + 431^2 = 145^3, \quad 2702^2 + 587^2 = 197^3, \quad 5778^2 + 971^2 = 325^3.$$

Proof. Let $n = 3$. Thanks to Table 1, we know that the only solution with $q = 3$ is the one given in (i). We may thus suppose that $q \ge 5$. Equation (26) gives $q^k = b(3a^2 - b^2)$. By the coprimality of $a$ and $b$, we have $b = \pm 1$ or $b = \pm q^k$. We note that $b = -1$ gives $q^k = 1 - 3a^2$, which is impossible. Also, if $b = q^k$, then $3a^2 - q^{2k} = 1$, which is impossible modulo 3. Thus either $b = 1$ or $b = -q^k$. If $b = 1$, then $q^k = 3a^2 - 1$, and if $b = -q^k$, then $q^{2k} = 3a^2 + 1$. From Theorem 1.1 of [Bennett and Skinner 2004], these equations have no solutions in positive integers if $k \ge 4$ or $k \ge 2$, respectively. If $k = 3$, the elliptic curve corresponding to the first equation has Mordell–Weil rank 0 over $\mathbb{Q}$, and it is straightforward to show that the equation has no integer solutions. We therefore have $k = 1$ in either case. Thus $q = 3a^2 - 1$ or $q^2 = 3a^2 + 1$, and these yield the parametric solutions in (ii) and (iii). For $5 \le q < 1000$, the primes $q$ of the form $3a^2 - 1$ are
$$11, 47, 107, 191, 431, 587, 971.$$
For $5 \le q < 1000$, the primes $q$ satisfying $q^2 = 3a^2 + 1$ are $q = 7$ and $97$. These yield the solutions given in the statement of the lemma. □

We expect that there are infinitely many primes $q$ of the form $3a^2 - 1$, but are very unsure about the number of primes $q$ satisfying $q^2 = 3a^2 + 1$ (the only ones known are 7, 97 and 708158977). Quantifying such results, in any case, is well beyond current technology.

In view of Lemma 5.2, we henceforth suppose that $n \ge 5$ is prime. The following lemma now completes the proof of Theorem 4.

Lemma 5.3. Let $(k, x, y, n)$ be a solution to (25) with prime $n \ge 5$ and odd prime $q$. Then $k$ is odd,
$$n \mid (q - 1) \ \text{if } q \equiv 1 \pmod 4, \qquad n \mid (q + 1) \ \text{if } q \equiv 3 \pmod 4, \qquad (27)$$
and there is an integer $a$ such that
$$y = a^2 + 1, \qquad x = \frac{(a + i)^n + (a - i)^n}{2}, \qquad \frac{(a + i)^n - (a - i)^n}{2i} = \pm q^k.$$
In particular, the only solutions to (25) with prime $3 \le q < 1000$ and prime $n \ge 5$ correspond to the identities
$$38^2 + 41^2 = 5^5, \qquad 278^2 + 29^2 = 5^7.$$

Proof. Suppose $n \ge 5$ is prime in (25). By Theorem 1 of [Bennett et al. 2010], the equation $A^4 + B^2 = C^m$ has no solutions satisfying $\gcd(A, B) = 1$, $AB \ne 0$ and $m \ge 4$. We conclude that $k$ is odd. We note that $(\alpha, \bar\alpha)$ is a Lucas pair and write $\{L_m\}$ for the corresponding Lucas sequence. By Theorem 6, $L_n$ must have a primitive divisor, and from (26) this primitive divisor is $q$. In particular, $q$ does not divide $D = (\alpha - \bar\alpha)^2 = -4b^2$. Thus, by (26), we have $b = \pm 1$ and $D = -4$. Moreover, the rank of apparition of $q$ in the sequence is $n$. By Theorem 7, we have $n \mid (q - 1)$ if $q \equiv 1 \pmod 4$ and $n \mid (q + 1)$ if $q \equiv 3 \pmod 4$.

We now let $q$ be a prime in the range $3 \le q < 1000$. There are 168 pairs $(q, n)$ with $q$ in this range and $n$ a prime $\ge 5$ satisfying (27), ranging from $(19, 5)$ to $(997, 83)$. For each of these pairs $(q, n)$, and each sign $\eta = \pm 1$, we need to consider the equation
$$\frac{(a + i)^n - (a - i)^n}{2i} = \eta \cdot q^k, \qquad (28)$$
where $k$ is an odd integer. We shall follow the sieving approach of Lemma 3.2 to eliminate all but two of the possible $2 \times 168 = 336$ triples $(q, n, \eta)$. Fix such a triple $(q, n, \eta)$. Let $f_n \in \mathbb{Z}[X]$ be the polynomial
$$f_n(X) = \frac{(X + i)^n - (X - i)^n}{2i}.$$
Let $\ell \nmid nq$ be an odd prime, and let $Z_\ell$ be the set of $\beta \in \mathbb{Z}/(\ell-1)\mathbb{Z}$ such that $\beta$ is odd and $f_n(X) - \eta \cdot q^\beta$ has a root in $\mathbb{F}_\ell$. It follows that $(k \bmod (\ell - 1)) \in Z_\ell$.
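The polynomial $f_n$ has integer coefficients, and the local test is exactly as in Lemma 3.2. An illustrative Python sketch of ours (a single triple $(q, n, \eta)$; the paper's sieve is in Magma):

```python
from sympy import symbols, I, expand, Poly

X = symbols('X')

def f_poly(n):
    """f_n(X) = ((X+i)^n - (X-i)^n)/(2i), an integer polynomial;
    e.g. f_5(X) = 5X^4 - 10X^2 + 1."""
    return Poly(expand(((X + I)**n - (X - I)**n) / (2 * I)), X)

def Z_ell(q, n, eta, ell):
    fn = f_poly(n)
    vals = {int(fn.eval(x)) % ell for x in range(ell)}
    return {b for b in range(1, ell - 1, 2)
            if (eta * pow(q, b, ell)) % ell in vals}

# Local conditions at a few auxiliary primes for (q, n, eta) = (19, 5, 1);
# these sets are intersected as in (21)-(22).
for ell in [3, 7, 11, 13]:
    print(ell, Z_ell(19, 5, 1, ell))
```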
Now let $\ell_1, \ell_2, \ldots, \ell_m$ be a collection of odd primes not dividing $qn$. Define $M$ and $Z_{\ell_1,\ldots,\ell_m}$ by (21) and (22), respectively. It is clear that $(k \bmod M) \in Z_{\ell_1,\ldots,\ell_m}$. We wrote a short Magma script which, for each triple $(q, n, \eta)$, computed $Z_{\ell_1,\ldots,\ell_m}$, where $\ell_1, \ldots, \ell_m$ are the odd primes $< 150$ distinct from $n$ and $q$. In all but two of the 336 cases, we found that $Z_{\ell_1,\ldots,\ell_m} = \varnothing$. The two exceptions are $(q, n, \eta) = (41, 5, 1)$ and $(29, 7, -1)$, and so these are the only two cases we need to consider. Let
$$F_n(X, Y) = \frac{(X + iY)^n - (X - iY)^n}{2iY}.$$
This is a homogeneous polynomial of degree $n - 1$ belonging to $\mathbb{Z}[X, Y]$. Now (28) can be written as $F_n(a, 1) = \eta \cdot q^k$. Thus it is sufficient to solve the Thue–Mahler equations $F_n(X, Y) = \eta \cdot q^k$ for $(q, n, \eta) = (41, 5, 1)$ and $(29, 7, -1)$. Explicitly, these equations are
$$5X^4 - 10X^2 Y^2 + Y^4 = 41^k \qquad (29)$$
and
$$7X^6 - 35X^4 Y^2 + 21X^2 Y^4 - Y^6 = -29^k. \qquad (30)$$
Using the Magma implementation of the Thue–Mahler solver described in [Gherga and Siksek 2022], we find that the solutions to (29) are $(X, Y, k) = (\pm 2, \pm 1, 1)$ and $(0, \pm 1, 0)$, and that the solutions to (30) are also $(X, Y, k) = (\pm 2, \pm 1, 1)$ and $(0, \pm 1, 0)$. These lead to the two solutions stated in the lemma. □

6. The equation $x^2 - q^{2k} = y^n$ with $y$ even: reduction to Thue–Mahler equations

Section 3 dealt with (7) in the case that $y$ is odd, using purely elementary means. We now turn our attention to (7) with $y$ even, and consider the equation
$$x^2 - q^{2k} = y^n, \quad x, k, n \text{ positive integers}, \quad n \ge 3, \quad \gcd(x, y) = 1, \quad y \text{ an even integer}. \qquad (31)$$
Here $q \ge 3$ is a prime and, without loss of generality, $n = 4$ or $n$ is an odd prime.

Lemma 6.1. Write $\gamma = 1 + \sqrt{2}$. Any solution to (31) with $n = 4$ and $q$ an odd prime must satisfy $k = 1$,
$$q = \frac{\gamma^{2m} + \gamma^{-2m}}{2}, \qquad x = \frac{\gamma^{4m} + 6 + \gamma^{-4m}}{8} \qquad \text{and} \qquad y = \frac{\gamma^{2m} - \gamma^{-2m}}{2\sqrt{2}}, \qquad (32)$$
for some integer $m$. In particular, the only solutions with $3 \le q < 1000$ correspond to the identities
$$5^2 - 3^2 = (\pm 2)^4, \qquad 145^2 - 17^2 = (\pm 12)^4 \qquad \text{and} \qquad 166465^2 - 577^2 = (\pm 408)^4.$$

Proof. Suppose $n = 4$. Then $(x + y^2)(x - y^2) = q^{2k}$, and so, by the coprimality of $x$ and $y$, $x + y^2 = q^{2k}$ and $x - y^2 = 1$, or, equivalently,
$$x = \frac{q^{2k} + 1}{2} \quad \text{and} \quad q^{2k} - 2y^2 = 1. \qquad (33)$$
First we show that $k = 1$. From the second equation, we have $(q^k + 1)(q^k - 1) = 2y^2$. Since the greatest common divisor of the two factors on the left is 2, we see that one of the two factors must be a perfect square, i.e., $q^k + 1 = z^2$ or $q^k - 1 = z^2$ for some nonzero integer $z$, and it is easy to see that $k$ must be odd. The impossibility of these cases for $k \ge 3$ follows from Mihăilescu's theorem (Catalan's conjecture). Hence $k = 1$. The second equation in (33) implies that $q + y\sqrt{2}$ is a totally positive unit in $\mathbb{Z}[\sqrt{2}]$. Thus
$$q + y\sqrt{2} = \gamma^{2m} \quad \text{and} \quad q - y\sqrt{2} = \gamma^{-2m}, \qquad (34)$$
for some integer $m$. The formulae for $q$ and $y$ in (32) follow from this, and the formula for $x$ follows from the first relation in (33). We focus on primes $3 \le q < 1000$. From the first relation in (34),
$$|m| < \frac{\log(2q)}{2\log\gamma} < \frac{\log 2000}{2\log(1 + \sqrt{2})} < 5.$$
Thus $-4 \le m \le 4$. The values $m = \pm 1, \pm 2, \pm 4$, respectively, give the three solutions in the statement of the lemma. If $m = 0$ or $\pm 3$, then we obtain $q = 1$ or $99$, which are not prime. □
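The units $\gamma^{2m} = q + y\sqrt{2}$ can be enumerated with integer arithmetic alone; the following short sketch of ours recovers the three solutions of Lemma 6.1:

```python
from sympy import isprime

# Enumerate gamma^(2m) = q + y*sqrt(2) for gamma = 1 + sqrt(2), m = 1,...,4,
# using gamma^2 = 3 + 2*sqrt(2) and integer arithmetic only (cf. (32)-(34)).
q, y = 1, 0                                # gamma^0 = 1 + 0*sqrt(2)
for m in range(1, 5):
    q, y = 3 * q + 4 * y, 2 * q + 3 * y    # multiply by 3 + 2*sqrt(2)
    if isprime(q):
        x = (q * q + 1) // 2               # from (33) with k = 1
        assert x * x - q * q == y ** 4
        print(f"{x}^2 - {q}^2 = {y}^4")    # m = 3 gives q = 99, not prime
```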
In view of Lemma 6.1, we may henceforth suppose that $n \ge 3$ is odd. Let $x'$ be either $x$ or $-x$, chosen so that $x' \equiv q^k \pmod 4$. From (31), we deduce the existence of relatively prime integers $y_1$ and $y_2$ for which
$$x' + q^k = 2y_1^n \quad \text{and} \quad x' - q^k = 2^{n-1} y_2^n, \qquad (35)$$
with $y = 2y_1 y_2$, so that we have
$$y_1^n - 2^{n-2} y_2^n = q^k. \qquad (36)$$
We have thus reduced the resolution of (31) for particular $q$ and $n$ to solving a degree-$n$ Thue–Mahler equation.

Lemma 6.2. The only solutions to (31) with $n \in \{3, 5\}$ and $3 \le q < 1000$ an odd prime correspond to the identities
$$253^2 - 3^2 = 40^3, \quad 1^2 - 3^2 = (-2)^3, \quad 7^2 - 3^4 = (-2)^5, \quad 29^2 - 5^4 = 6^3, \quad 9^2 - 7^2 = 2^5, \quad 43^2 - 11^2 = 12^3,$$
$$499^2 - 13^2 = 12^5, \quad 15^2 - 17^2 = (-4)^3, \quad 397^2 - 17^4 = 42^3, \quad 25^2 - 29^2 = (-6)^3, \quad 11^2 - 43^2 = (-12)^3,$$
$$1415^2 - 43^2 = 126^3, \quad 30042907^2 - 43^8 = 96222^3, \quad 500047^2 - 47^2 = 6300^3, \quad 55^2 - 53^2 = 6^3, \quad 179^2 - 71^2 = 30^3,$$
$$4785^2 - 71^4 = (-136)^3, \quad 775^2 - 89^2 = 84^3, \quad 155^2 - 101^2 = 24^3, \quad 13609^2 - 109^2 = 570^3, \quad 141^2 - 109^2 = 20^3,$$
$$129^2 - 127^2 = 8^3, \quad 123^2 - 127^2 = (-10)^3, \quad 2789^2 - 127^2 = 198^3, \quad 71^2 - 179^2 = (-30)^3, \quad 4197^2 - 197^2 = 260^3,$$
$$277^2 - 223^2 = 30^3, \quad 249^2 - 251^2 = (-10)^3, \quad 235^2 - 251^2 = (-6)^5, \quad 255^2 - 257^2 = (-4)^5, \quad 118901^2 - 263^2 = 2418^3,$$
$$223^2 - 277^2 = (-30)^3, \quad 2355^2 - 307^2 = 176^3, \quad 143027^2 - 307^4 = 2262^3, \quad 557^2 - 307^2 = 60^3, \quad 327^2 - 359^2 = (-28)^3,$$
$$4146689^2 - 383^4 = 25800^3, \quad 289^2 - 397^2 = (-42)^3, \quad 433^2 - 431^2 = 12^3, \quad 431^2 - 433^2 = (-12)^3,$$
$$4308693^2 - 433^4 = 26462^3, \quad 979^2 - 479^2 = 90^3, \quad 13^2 - 499^2 = (-12)^5, \quad 23831^2 - 503^2 = 828^3, \quad 307^2 - 557^2 = (-60)^3,$$
$$93^2 - 593^2 = (-70)^3, \quad 857^2 - 601^2 = 72^3, \quad 713^2 - 659^2 = 42^3, \quad 85016415^2 - 683^2 = 193346^3, \quad 297053^2 - 701^2 = 4452^3,$$
$$731^2 - 727^2 = 18^3, \quad 3655^2 - 739^2 = 234^3, \quad 561^2 - 811^2 = (-70)^3, \quad 601^2 - 857^2 = (-72)^3, \quad 1051^2 - 997^2 = 48^3.$$

Proof. For $n \in \{3, 5\}$ and primes $3 \le q < 1000$, we solved the Thue–Mahler equation (36) using the Magma implementation of the Thue–Mahler solver described in [Gherga and Siksek 2022]. The computation resulted in the solutions given in the statement of the lemma. □

7. The modular approach to Diophantine equations: some background

Let $F/\mathbb{Q}$ be an elliptic curve over the rationals of conductor $N_F$ and minimal discriminant $\Delta_F$. Let $p \ge 5$ be a prime. The action of $\operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q})$ on the $p$-torsion $F[p]$ gives rise to a 2-dimensional mod $p$ representation
$$\bar\rho_{F,p} : \operatorname{Gal}(\overline{\mathbb{Q}}/\mathbb{Q}) \to \operatorname{GL}_2(\mathbb{F}_p).$$
Suppose $\bar\rho_{F,p}$ is irreducible (that is, $F$ does not have a $p$-isogeny); this can often be established by appealing to Mazur's isogeny theorem. A standard consequence of Ribet's level-lowering theorem, building on the modularity of elliptic curves over $\mathbb{Q}$ due to Wiles and others [Wiles 1995; Breuil et al. 2001], is that $\bar\rho_{F,p}$ arises from a weight-2 newform of level
$$N = N_F \Big/ \prod_{\substack{\ell \,\|\, N_F \\ p \,\mid\, \operatorname{ord}_\ell(\Delta_F)}} \ell.$$
More precisely, there is a newform $f$ of weight 2 and level $N$ with normalized $q$-expansion
$$f = q + \sum_{m=2}^{\infty} c_m q^m \qquad (37)$$
such that
$$\bar\rho_{F,p} \sim \bar\rho_{f,\mathfrak{p}}, \qquad (38)$$
where $\mathfrak{p}$ is a prime ideal above $p$ of the ring of integers $\mathcal{O}_f$ of the Hecke eigenfield $K_f = \mathbb{Q}(c_1, c_2, \ldots)$.

The original motivation for the great theorems of Ribet and Wiles included Fermat's last theorem. To motivate what is to come in later sections, we quickly sketch the deduction of FLT from the above. Let $x$, $y$ and $z$ be nonzero coprime rational integers satisfying $x^p + y^p + z^p = 0$, where $p \ge 5$ is prime. After appropriately permuting $x$, $y$ and $z$, we may suppose that $2 \mid y$ and that $x^p \equiv -1 \pmod 4$. Let $F$ be the Frey–Hellegouarch curve
$$Y^2 = X(X - x^p)(X + y^p).$$
It follows from Mazur's isogeny theorem and related results that $\bar\rho_{F,p}$ is irreducible. A short computation reveals that
$$\Delta_F = 2^{-8}(xyz)^{2p} \quad \text{and} \quad N_F = 2\operatorname{Rad}(xyz),$$
where $\operatorname{Rad}(m)$ denotes the product of the prime divisors of $m$. We find that $N = 2$. Thus $\bar\rho_{F,p}$ arises from a newform $f$ of weight 2 and level 2; the nonexistence of such newforms provides the desired contradiction.
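The level recipe above is mechanical once $\Delta_F$ and $N_F$ are known. The following small Python sketch of ours (not the authors' code; the $xyz = 2 \cdot 3 \cdot 5$ pattern is hypothetical) applies the displayed formula in the semistable case:

```python
from sympy import factorint

def lowered_level(NF, disc_orders, p):
    """Level predicted by Ribet's theorem: strip from N_F each prime
    ell || N_F with p | ord_ell(Delta_F).  disc_orders maps
    ell -> ord_ell(Delta_F)."""
    N = 1
    for ell, e in factorint(NF).items():
        if e == 1 and disc_orders.get(ell, 0) % p == 0:
            continue                     # ell is removed from the level
        N *= ell ** e
    return N

# FLT toy check: ord_2(Delta_F) = 2p*ord_2(y) - 8 and, for odd ell | xyz,
# ord_ell(Delta_F) = 2p*ord_ell(xyz); so the lowered level is N = 2.
p = 7
orders = {2: 2 * p - 8, 3: 2 * p, 5: 2 * p}   # hypothetical xyz = 2*3*5
print(lowered_level(2 * 3 * 5, orders, p))     # -> 2
```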
It is possible to use a similar strategy to treat various Diophantine problems, including generalized Fermat equations $Ax^p + By^q = Cz^r$, for certain signatures $(p, q, r)$. This is done by Kraus for signature $(p, p, p)$ and by Bennett and Skinner for signature $(p, p, 2)$. Fortunately, these papers provide recipes for the Frey–Hellegouarch curves $F$ and for the levels $N$, and establish the required irreducibility of $\bar\rho_{F,n}$. We shall make frequent use of these recipes in later sections. It is known (and easily checked using standard dimension formulae) that there are no weight-2 newforms at levels
$$1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 12, 13, 16, 18, 22, 25, 28, 60, \qquad (39)$$
but there are newforms at all other levels. Thus, if the level $N$ predicted by the recipes is not in the list (39), then we do not immediately obtain a contradiction. Instead, we may compute the possible newforms using implementations (for example, in Magma or SAGE) of modular symbols algorithms due to Cremona and Stein. We then use the relation (38) to help us extract information about the solutions to our Diophantine equation. In doing this, we shall often make use of the following standard result; see, for example, [Kraus and Oesterlé 1992; Siksek 2012, Section 5].

Lemma 7.1. Let $F/\mathbb{Q}$ be an elliptic curve of conductor $N_F$. Let $f$ be a weight-2 newform of level $N$ having $q$-expansion as in (37). Suppose (38) holds for some prime $p \ge 5$. Let $\ell \ne p$ be a rational prime.
(i) If $\ell \nmid N_F N$, then $a_\ell(F) \equiv c_\ell \pmod{\mathfrak{p}}$.
(ii) If $\ell \nmid N$ but $\ell \| N_F$, then $\ell + 1 \equiv \pm c_\ell \pmod{\mathfrak{p}}$.
If $f$ is a rational newform (i.e., $K_f = \mathbb{Q}$), then (i) and (ii) also hold for $\ell = p$.

We will also make frequent use of the following theorem.

Theorem 8 [Kraus 1997, Proposition 2]. Let $f$ be a newform of weight 2 and level $N$ with $q$-expansion as in (37), and Hecke eigenfield $K_f$ with ring of integers $\mathcal{O}_f$. Write
$$M = \operatorname{lcm}(4, N) \quad \text{and} \quad \mu(M) = M \cdot \prod_{\substack{r \mid M \\ r \text{ prime}}} \left(1 + \frac{1}{r}\right).$$
Let $\mathfrak{p}$ be a prime ideal of $\mathcal{O}_f$ and suppose the following two conditions hold.
(i) For all primes $\ell \le \mu(M)/6$ with $\ell \nmid 2N$, we have $\ell + 1 \equiv c_\ell \pmod{\mathfrak{p}}$.
(ii) For all primes $\ell \le \mu(M)/6$ with $\ell \mid 2N$ and $\ell^2 \nmid 4N$, we have $(\ell + 1)(c_\ell - 1) \equiv 0 \pmod{\mathfrak{p}}$.
Then $\ell + 1 \equiv c_\ell \pmod{\mathfrak{p}}$ for all primes $\ell \nmid 2N$.

8. The equation $x^2 - q^{2k} = y^n$ with $y$ even: an approach via Frey curves

We are still concerned with (31). In view of the results of Section 6, we may suppose that $n \ge 7$ is prime. To show that (31) has no solutions for a particular pair $(q, n)$, it is enough to show the same for (36). We shall think of (36) as a Fermat equation of signature $(n, n, n)$ by writing it as
$$y_1^n - 2^{n-2} y_2^n = q^k \cdot 1^n.$$
This enables us to apply recipes of Kraus for Frey–Hellegouarch curves and level lowering. The following lemma will eliminate some cases when applying those recipes.

Lemma 8.1. Suppose $n \ge 7$ is prime. Then $\gcd(k, 2n) = 1$.

Proof. Theorem 1.2 of [Bennett and Skinner 2004] asserts that the equation $A^p + 2^\alpha B^p = C^2$ with prime $p \ge 7$ has no solutions in nonzero integers with $\gcd(A, B, C) = 1$ and $\alpha \ge 2$. It immediately follows from (36) that $k$ is odd. Moreover, Theorem 3 of [Ribet 1997] asserts that the equation $A^p + 2^\alpha B^p + C^p = 0$ has no solutions with $ABC \ne 0$ for prime $p \ge 7$ and $2 \le \alpha \le p - 1$. It follows that $n \nmid k$. □

Following Kraus, we attach to a solution of (36) a Frey–Hellegouarch curve $F$, where
$$F : Y^2 = X(X + y_1^n)(X + 2^{n-2} y_2^n) \qquad (40)$$
if $q \equiv 1 \pmod 4$, and
$$F : Y^2 = X(X - q^k)(X + 2^{n-2} y_2^n) \qquad (41)$$
if $q \equiv 3 \pmod 4$. The Frey–Hellegouarch curve $F$ is semistable, and has minimal discriminant and conductor, respectively, given by
$$\Delta_F = 2^{2n-12}\, q^{2k}\, (y_1 y_2)^{2n} \quad \text{and} \quad N_F = 2q \cdot \operatorname{Rad}_2(y_1 y_2), \qquad (42)$$
where $\operatorname{Rad}_2(m)$ denotes the product of the odd primes dividing $m$.
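A numeric sanity check of (42) is immediate from the standard discriminant $\Delta = 16\,(ab(a - b))^2$ of $Y^2 = X(X + a)(X + b)$, dividing by $2^{12}$ to pass to the minimal model. The Python sketch below (ours) uses the genuine solution $(-1)^7 - 2^5 \cdot (-1)^7 = 31$ of (36):

```python
from sympy import factorint

# Data from the solution (-1)^7 - 2^5*(-1)^7 = 31 of (36): n = 7, q = 31, k = 1.
n, q, k, y1, y2 = 7, 31, 1, -1, -1
a, b = y1**n, 2**(n - 2) * y2**n
assert a - b == q**k

Delta_min = 16 * (a * b * (a - b))**2 // 2**12      # minimal discriminant
assert Delta_min == 2**(2*n - 12) * q**(2*k) * (y1 * y2)**(2*n)
print(factorint(abs(Delta_min)))                     # {2: 2, 31: 2}
```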
From Kraus, the mod $n$ representation of $F$ arises from a newform $f$ of weight 2 and level $N = 2q$. Let $\ell \nmid 2q$ be a prime. Write
$$T = \{a \in \mathbb{Z} \cap [-2\sqrt{\ell}, 2\sqrt{\ell}] : a \equiv \ell + 1 \pmod 4\}.$$
Let
$$D'_{f,\ell} = \left((\ell + 1)^2 - c_\ell^2\right) \cdot \prod_{a \in T} (a - c_\ell),$$
and
$$D_{f,\ell} = \begin{cases} \ell \cdot D'_{f,\ell} & \text{if } K_f \ne \mathbb{Q}, \\ D'_{f,\ell} & \text{if } K_f = \mathbb{Q}. \end{cases}$$

Lemma 8.2. Let $f$ be a newform of weight 2 and level $2q$, and suppose that (38) holds with $\mathfrak{p} = \mathfrak{n}$ a prime ideal above $n$. Let $\ell \nmid 2q$ be a prime. Then $\mathfrak{n} \mid D_{f,\ell}$.

Proof. If $\ell \nmid y_1 y_2$, then $\ell \nmid N_F$, and so $\ell$ is a prime of good reduction for $F$. As $F$ has full 2-torsion, we deduce that $4 \mid (\ell + 1 - a_\ell(F))$. By the Hasse–Weil bounds, $a_\ell(F)$ belongs to the set $T$. If $\ell \mid y_1 y_2$, then $\ell \| N_F$. The lemma now follows from Lemma 7.1. □

It is straightforward, from Lemma 8.2 and the fact that $n \in \mathfrak{n}$, that $n \mid \operatorname{Norm}(D_{f,\ell})$. Thus, if $D_{f,\ell} \ne 0$, we immediately obtain an upper bound upon the exponent $n$. This approach will result in a bound on the exponent $n$ in (31) unless $f$ corresponds to an elliptic curve over $\mathbb{Q}$ with full 2-torsion and conductor $N = 2q$; for this, see [Siksek 2012, Section 9]. Mazur showed that such an elliptic curve exists if and only if $q \ge 31$ is a Fermat or a Mersenne prime; see, for example, [Siksek 2012, Theorem 8]. We note that 31, 127 and 257 are the only such primes in our range $3 \le q < 1000$.
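When $f$ is rational, so that $c_\ell = a_\ell(E)$ for an elliptic curve $E$, the quantities $D'_{f,\ell}$ are easy to compute by naive point counting. The Python sketch below is ours (the paper uses Magma's modular symbols machinery); for the exceptional curve $E_{31}: Y^2 = X(X+1)(X+32)$ of conductor 62 these quantities all vanish, which is exactly why $q = 31$ cannot be eliminated this way:

```python
from math import isqrt

def a_ell(a2, a4, ell):
    """a_ell(E) = ell + 1 - #E(F_ell) for E: Y^2 = X^3 + a2*X^2 + a4*X,
    counted naively (small odd ell of good reduction only)."""
    npts = 1                                           # point at infinity
    for x in range(ell):
        rhs = (x**3 + a2 * x**2 + a4 * x) % ell
        if rhs == 0:
            npts += 1
        else:
            npts += 2 if pow(rhs, (ell - 1) // 2, ell) == 1 else 0
    return ell + 1 - npts

def D_prime(a2, a4, ell):
    """D'_{f,ell} for the rational newform attached to E."""
    c = a_ell(a2, a4, ell)
    T = [a for a in range(-2 * isqrt(ell), 2 * isqrt(ell) + 1)
         if a % 4 == (ell + 1) % 4]
    D = (ell + 1)**2 - c**2
    for a in T:
        D *= (a - c)
    return D

# E_31: Y^2 = X(X+1)(X+32) = X^3 + 33X^2 + 32X, cf. (43) with q = 31 = 2^5 - 1.
print([D_prime(33, 32, ell) for ell in (3, 5, 7)])    # [0, 0, 0]
```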
We shall exploit this approach to prove the following.

Proposition 8.3. Let $n \ge 7$ and $3 \le q < 1000$ be primes.
(i) If $q \notin \{31, 127, 257\}$, then (31) has no solutions.
(ii) Suppose $q \in \{31, 127, 257\}$, write $q = 2^m + \eta$, where $\eta = \pm 1$, and let
$$E_q : Y^2 = X(X + 1)(X - \eta \cdot 2^m). \qquad (43)$$
Suppose $(k, x, y)$ is a solution to (31) and let $F$ be as above. Then $\bar\rho_{F,n} \sim \bar\rho_{E_q,n}$.

In Cremona's notation, these $E_q$ are the elliptic curves 62a2, 254d2 and 514a2, for $q = 31$, 127 and 257, respectively.

Proof. There are no newforms of weight 2 and levels 6, 10 and 22. Therefore, the proof is complete in the cases where $q \in \{3, 5, 11\}$. We may thus suppose that $7 \le q < 1000$ is prime and that $q \ne 11$. For a newform $f$ of weight 2 and level $2q$, and a collection of primes $\ell_1, \ldots, \ell_m$ (all coprime to $2q$), we write $D_{f,\ell_1,\ldots,\ell_m}$ for the ideal of $\mathcal{O}_f$ generated by $D_{f,\ell_1}, \ldots, D_{f,\ell_m}$, and let $B_{f,\ell_1,\ldots,\ell_m} \in \mathbb{Z}$ be the norm of this ideal. If $\bar\rho_{F,n} \sim \bar\rho_{f,\mathfrak{n}}$, then $\mathfrak{n} \mid D_{f,\ell_1,\ldots,\ell_m}$ by Lemma 8.2. As $n \in \mathfrak{n}$, we deduce that $n \mid B_{f,\ell_1,\ldots,\ell_m}$. In our computations, we take $\ell_1, \ldots, \ell_m$ to be all the primes $< 200$ distinct from 2 and $q$, and write $B_f$ for $B_{f,\ell_1,\ldots,\ell_m}$. We wrote a short Magma script which computed, for all newforms $f$ at all levels $2q$ under consideration, the integer $B_f$. We found that $B_f \ne 0$ for all newforms $f$ except for three rational newforms of levels 62, 254 and 514 (corresponding to $q = 31$, 127 and 257, respectively). Thus, for all other newforms, we at least obtain a bound on $n$. In many cases this bound is already sharp enough to contradict our assumption that $n \ge 7$. We give a few examples.

Let $q = 13$. Then there are two eigenforms $f_1$, $f_2$ of level $2q = 26$, and $B_{f_1} = 3 \times 5$, $B_{f_2} = 3 \times 7$. Thus we eliminate $f_1$ from consideration, and also conclude that $n = 7$. It is natural to wonder if $n = 7$ can be eliminated by increasing the size of our set of primes $\ell_1, \ldots, \ell_m$, but this is not the case. The newform $f_2$ is rational and corresponds to the elliptic curve 26b1, with Weierstrass model
$$E' : Y^2 + XY + Y = X^3 - X^2 - 3X + 3.$$
The torsion subgroup of $E'(\mathbb{Q})$ is isomorphic to $\mathbb{Z}/7\mathbb{Z}$, generated by the point $(1, 0)$. In particular, for any prime $\ell \nmid 26$, we have $7 \mid (\ell + 1 - a_\ell(E'))$. Since $a_\ell(E') = c_\ell(f_2)$, we have $7 \mid D_{f_2,\ell}$, and thus $7 \mid B_{f_2,\ell_1,\ldots,\ell_m}$, regardless of the set of primes $\ell_1, \ldots, \ell_m$ that we choose. However, we can still obtain a contradiction for $n = 7$ in this case. Indeed, we have $\bar\rho_{F,7} \sim \bar\rho_{f_2,7} \sim \bar\rho_{E',7}$. Since $E'$ has nontrivial 7-torsion, the representation $\bar\rho_{E',7}$ is reducible. However, the representation $\bar\rho_{F,7}$ of the Frey curve is irreducible, as shown by Kraus [1997, Lemme 4], using the fact that $F$ has full rational 2-torsion; this gives the desired contradiction.

For $q = 31$, there are two newforms, $g_1$ and $g_2$. We find that $B_{g_1} = 0$ and $B_{g_2} = 2^3 \times 3^2$; thus we may eliminate $g_2$ from consideration. The eigenform $g_1$ is rational and corresponds to the elliptic curve $E_{31}$ with Cremona label 62a2. Hence $\bar\rho_{F,n} \sim \bar\rho_{g_1,n} \sim \bar\rho_{E_{31},n}$, whence the proof is complete for $q = 31$.

For $q = 37$, there are two newforms, $h_1$ and $h_2$. We find that $B_{h_1} = 3^3$ and $B_{h_2} = 19$. Thus $n = 19$ and
$$\bar\rho_{F,19} \sim \bar\rho_{h_2,\mathfrak{n}}. \qquad (44)$$
The newform $h_2$ has $q$-expansion
$$h_2 = q + q^2 + \alpha q^3 + q^4 + (-3\alpha - 1) q^5 + \alpha q^6 + 2\alpha q^7 + \cdots, \quad \text{where } \alpha = \frac{-1 + \sqrt{5}}{2},$$
and Hecke eigenfield $K = \mathbb{Q}(\sqrt{5})$. Let $\mathfrak{n}$ be the prime ideal $\mathfrak{n} = (4 - \alpha) \cdot \mathcal{O}_K$, having norm 19. We checked, using Theorem 8, that $\ell + 1 \equiv c_\ell \pmod{\mathfrak{n}}$ for all primes $\ell \nmid 2 \cdot 37$, where $c_\ell$ is the $\ell$-th coefficient of $h_2$. From relation (44), we know that $a_\ell(F) \equiv c_\ell \pmod{\mathfrak{n}}$ for all primes $\ell$ of good reduction for $F$. Thus $19 \mid (\ell + 1 - a_\ell(F))$ for all primes $\ell$ of good reduction. As before, this now implies that $\bar\rho_{F,19}$ is reducible [Serre 1975, IV-6], giving a contradiction. The proof is thus complete for $q = 37$.

The above arguments allow us to prove (ii) in the statement of the proposition, and to obtain a contradiction for all $3 \le q < 1000$ with $q \notin \{31, 127, 257\}$, except when $n = 7$ and $q$ belongs to the list
$$43, 101, 103, 139, 163, 379, 467, 509, 557, 569, 839, 937, 947, 977.$$
For $n = 7$ and these values of $q$, we checked, using the aforementioned Thue–Mahler solver, that the only solutions to (36) are $(y_1, y_2, k) = (1, 0, 0)$. Since $k \ne 0$ in (31), the proof is complete. □

Symplectic criteria. When $q \ge 31$ is a Fermat or Mersenne prime, it does not seem to be possible, working purely with Galois representations of elliptic curves, to eliminate the possibility that $\bar\rho_{F,n} \sim \bar\rho_{E_q,n}$. However, the so-called "symplectic method" of Halberstadt and Kraus allows us to derive an additional restriction on the solutions to (31).

Lemma 8.4. Let $q = 2^m + \eta$ be a Fermat or Mersenne prime. Let $n \ge 7$ be a prime with $n \ne q$. Suppose $(x, y, k)$ is a solution to (31), let $F$ be the Frey–Hellegouarch curve constructed above, and let $E_q$ be given by (43). Suppose $\bar\rho_{F,n} \sim \bar\rho_{E_q,n}$. Then either $n \mid (m - 4)$ or
$$\left(\frac{(24 - 6m)k}{n}\right) = 1. \qquad (45)$$

Proof. We note that the curves $F$ and $E_q$ have multiplicative reduction at both 2 and $q$. Write $\Delta_1$ and $\Delta_2$ for the minimal discriminants of $F$ and $E_q$, respectively. By [Halberstadt and Kraus 2002, Lemme 1.6], the ratio
$$\frac{\operatorname{ord}_2(\Delta_1) \cdot \operatorname{ord}_q(\Delta_1)}{\operatorname{ord}_2(\Delta_2) \cdot \operatorname{ord}_q(\Delta_2)}$$
is a square modulo $n$, provided $n \nmid \operatorname{ord}_2(\Delta_i)$ and $n \nmid \operatorname{ord}_q(\Delta_i)$. It is in invoking this result of Halberstadt and Kraus that we require the assumption $n \ne q$. We find that
$$\Delta_1 = 2^{2n-12}\, q^{2k}\, (y_1 y_2)^{2n} \quad \text{and} \quad \Delta_2 = 2^{2m-8}\, q^2.$$
We have previously noted that $n \nmid k$, by appealing to a result of Ribet. Suppose $n \nmid (m - 4)$. Then the valuations $\operatorname{ord}_2(\Delta_i)$ and $\operatorname{ord}_q(\Delta_i)$ are all indivisible by $n$. The result follows. □
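Condition (45) is a cheap test. For instance, with sympy's jacobi_symbol one can tabulate which residues of $k$ modulo $n$ survive; the sketch below (ours, purely illustrative) does this for $q = 31$, so $m = 5$ and $24 - 6m = -6$:

```python
from sympy import jacobi_symbol

# Condition (45) for q = 31 = 2^5 - 1: either n | (m - 4) = 1, which is
# impossible for n >= 7, or the Legendre symbol ((-6k)/n) equals 1.
n = 7
ok = [k for k in range(1, n) if jacobi_symbol(-6 * k, n) == 1]
print(ok)   # -> [1, 2, 4]: residues of k mod 7 compatible with (45)
```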
9. The equation $x^2 - q^{2k} = y^n$: an upper bound for the exponent $n$

To help us complete the proof of Theorem 5, we begin by deriving an upper bound for $n$. Our approach is essentially a minor sharpening of Theorem 3 of [Bugeaud 1997] in a slightly special case. Since this result is valid for an arbitrary prime $q$, it may be of independent interest.

Theorem 9. Let $x, y, q, k \ge 1$ and $n \ge 3$ be integers satisfying (7), with $n$ and $q$ prime, and $q \nmid x$. Then
$$n < 1000\, q \log q.$$

Proof. If $q = 2$, then we have $n \le 5$ from Theorem 1.2 of [Bennett and Skinner 2004]. We may thus suppose that $q$ is odd and, additionally, that $y$ is even, for otherwise, via Proposition 3.1, we immediately obtain the much stronger result that $n \mid (q - 1)$. We are therefore in case (35). By Proposition 8.3, we may suppose that $q = 31$ or that $q \ge 127$. Set $Y = \max\{|y_1|, |2y_2|\}$ and suppose first that
$$q^k \ge Y^{n/2}, \qquad (46)$$
or, equivalently,
$$2k \log q \ge n \log Y. \qquad (47)$$
We set
$$\Lambda = \frac{q^k}{(2y_2)^n} = \left(\frac{y_1}{2y_2}\right)^{n} - \frac{1}{4};$$
we wish to apply an upper bound for linear forms in $q$-adic logarithms to $\Lambda$, in order to bound $k$. To do this, we must first treat the case where $y_1/2y_2$ and $\frac14$ are multiplicatively dependent, i.e., where $y_1 y_2$ has no odd prime divisors. Under this assumption, since $y_1$ is odd, we find from (36) that $2^j \pm 1 = q^k$, for an integer $j$ with $j \equiv -2 \pmod n$. Via Mihăilescu's theorem, if $n \ge 7$, then necessarily $k = 1$, $y_1 = \pm 1$, $y_2 = -2^\kappa$ for some integer $\kappa$, and $q = 2^{(\kappa+1)n-2} \pm 1$. In this case, we find a solution to (7) corresponding to the identity
$$(-q \pm 2)^2 - q^2 = 4 \mp 4q = (\mp 2^{\kappa+1})^n,$$
whereby, certainly, $n < 1000\, q \log q$.

Otherwise, we may suppose that $y_1/2y_2$ and $\frac14$ are multiplicatively independent and that $Y \ge 3$. We will appeal to Théorème 4 of Bugeaud and Laurent with, in the notation of that result, $(\mu, \nu) = (10, 5)$ (see also Proposition 1 of Bugeaud). Before we state this result, we require some notation. Let $\overline{\mathbb{Q}}_q$ denote an algebraic closure of the $q$-adic field $\mathbb{Q}_q$, and define $\nu_q$ to be the unique extension to $\overline{\mathbb{Q}}_q$ of the standard $q$-adic valuation over $\mathbb{Q}_q$, normalized so that $\nu_q(q) = 1$. For any algebraic number $\alpha$ of degree $d$ over $\mathbb{Q}$, define the absolute logarithmic height of $\alpha$ via the formula
$$h(\alpha) = \frac{1}{d}\left(\log |a_0| + \sum_{i=1}^{d} \log \max\left(1, |\alpha^{(i)}|\right)\right), \qquad (48)$$
where $a_0$ is the leading coefficient of the minimal polynomial of $\alpha$ over $\mathbb{Z}$ and the $\alpha^{(i)}$ are the conjugates of $\alpha$ in $\mathbb{C}$.

Theorem 10 (Bugeaud–Laurent). Let $q$ be a prime number and let $\alpha_1, \alpha_2$ denote algebraic numbers which are $q$-adic units. Let $f$ be the residual degree of the extension $\mathbb{Q}_q(\alpha_1, \alpha_2)/\mathbb{Q}_q$ and put $D = [\mathbb{Q}_q(\alpha_1, \alpha_2) : \mathbb{Q}_q]/f$. Let $b_1$ and $b_2$ be positive integers and put
$$\Lambda_1 = \alpha_1^{b_1} - \alpha_2^{b_2}.$$
Denote by $A_1 > 1$ and $A_2 > 1$ real numbers such that
$$\log A_i \ge \max\left\{h(\alpha_i), \frac{\log q}{D}\right\}, \quad i \in \{1, 2\},$$
and put
$$b' = \frac{b_1}{D \log A_2} + \frac{b_2}{D \log A_1}.$$
If $\alpha_1$ and $\alpha_2$ are multiplicatively independent, then we have the bound
$$\nu_q(\Lambda_1) \le \frac{24\, q\, (q^f - 1)}{(q - 1) \log^4 q}\, D^4 \left(\max\left\{\log b' + \log\log q + 0.4,\ \frac{10 \log q}{D},\ 5\right\}\right)^2 \log A_1 \log A_2.$$
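For the rational numbers appearing below, the height (48) reduces to $h(a/b) = \log\max(|a|, |b|)$ for $a/b$ in lowest terms. A tiny Python sketch of ours (the values of $y_1, y_2$ are hypothetical) computing the heights of the two $q$-adic arguments used in the proof:

```python
from math import log
from fractions import Fraction

def height(alpha: Fraction) -> float:
    """Absolute logarithmic height (48) of a rational a/b in lowest terms:
    h(a/b) = log max(|a|, |b|)."""
    return log(max(abs(alpha.numerator), abs(alpha.denominator)))

y1, y2 = 5, -3                        # hypothetical odd y1, nonzero y2
a1, a2 = Fraction(y1, 2 * y2), Fraction(1, 4)
print(height(a1), height(a2))         # h(-5/6) = log 6, h(1/4) = log 4
```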
We apply this with
$$f = 1, \quad D = 1, \quad \alpha_1 = \frac{y_1}{2y_2}, \quad \alpha_2 = \frac14, \quad b_1 = n, \quad b_2 = 1,$$
so that we may choose
$$\log A_1 = \max\{\log Y, \log q\}, \qquad \log A_2 = \max\{2\log 2, \log q\}, \qquad b' = \frac{n}{\log A_2} + \frac{1}{\log A_1}.$$
Let us assume now that
$$n \ge 1000\, q \log q, \qquad (49)$$
whilst recalling that either $q = 31$ or $q \ge 127$. We therefore have $b' < 1.001\, n/\log q$, and hence find that
$$k \le \frac{24\, q}{\log^3 q}\left(\max\{\log n + 0.401,\ 10\log q\}\right)^2 \log A_1, \qquad (50)$$
whence, from (47),
$$n \log Y \le \frac{48\, q}{\log^2 q}\left(\max\{\log n + 0.401,\ 10\log q\}\right)^2 \log A_1. \qquad (51)$$
Let us suppose first that $\log n + 0.401 \ge 10\log q$. If $q \ge Y$, we have $\log A_1 = \log q$, and hence
$$\frac{n \log Y}{(\log n + 0.401)^2} \le \frac{48\, q}{\log q}.$$
From (49), we thus have
$$\frac{\log^2 q}{\left(\log(1000\, q \log q) + 0.401\right)^2} \le \frac{0.048}{\log Y} \le \frac{0.048}{\log 3},$$
contradicting $q \ge 31$. If, on the other hand, $q < Y$, then $\log A_1 = \log Y$, and so
$$\frac{n}{(\log n + 0.401)^2} \le \frac{48\, q}{\log^2 q}. \qquad (52)$$
With (49), this implies that
$$\log^3 q < 0.048\left(\log(1000\, q \log q) + 0.401\right)^2,$$
again contradicting $q \ge 31$. We may therefore assume that $\log n + 0.401 < 10\log q$, so that
$$n \log Y \le 4800\, q \log A_1.$$
If $q \ge Y$, then, from (49), $\log Y < 4.8$, whereby $3 \le Y \le 121$. If $|y_1| \ge 2|y_2|$, it follows from (36) that
$$q^k \ge |y_1|^n - \frac14 |y_1|^n = \frac34 Y^n. \qquad (53)$$
Suppose, conversely, that $|y_1| \le 2|y_2| - 1$ (so that $1 \le |y_2| \le 60$). If $y_1 > 0$ and $y_2 < 0$, it follows from (36) that
$$q^k > \frac14 Y^n. \qquad (54)$$
We may thus suppose that $y_1$ and $y_2$ have the same sign, whence, from (36), (49) and $|y_2| \le 60$,
$$q^k = 2^{n-2}|y_2|^n - |y_1|^n > 0.24 \cdot |2y_2|^n = 0.24 \cdot Y^n. \qquad (55)$$
Combining (53), (54) and (55), we thus have, from (50),
$$n \log Y + \log 0.24 < k \log q \le 2400\, q \log q,$$
contradicting (49) and $q \ge 31$. If $q < Y$, then, via (49),
$$1000\, q \log q \le n \le 4800\, q, \qquad (56)$$
a contradiction for $q \ge 127$. We may thus suppose that $q = 31$, $Y > 31$ and, from (52), $n \le 12119$, which contradicts (49).

Next, suppose that inequality (46) (and hence also inequality (47)) fails to hold. In this case, we will apply lower bounds for linear forms in two complex logarithms. Following Bugeaud, we take
$$\Lambda_1 = 4\Lambda = \frac{4 q^k}{(2y_2)^n} = 4\left(\frac{y_1}{2y_2}\right)^n - 1,$$
so that
$$\log |\Lambda_1| = 2\log 2 + k \log q - n \log |2y_2|. \qquad (57)$$
If $Y = \max\{|y_1|, |2y_2|\} = |y_1|$, then, from (35), it follows that
$$q^k \ge \frac34 |y_1|^n = \frac34 Y^n,$$
contradicting $q^k < Y^{n/2}$. It follows that $Y = |2y_2|$, and so, from (57),
$$\log |\Lambda_1| = 2\log 2 + k \log q - n \log Y \le 2\log 2 - \frac{n}{2}\log Y. \qquad (58)$$
From (49), we have $|\Lambda_1| \le \frac{1}{2000}$, so that
$$n \log\left|\frac{2y_2}{y_1}\right| - 2\log 2 \le |\log(1 - \Lambda_1)| \le 1.001\, |\Lambda_1|. \qquad (59)$$
We will appeal to the following.

Theorem 11 [Laurent 2008, Corollary 1]. Consider the linear form
$$\Lambda = c_2 \log \beta_2 - c_1 \log \beta_1,$$
where $c_1$ and $c_2$ are positive integers, and $\beta_1$ and $\beta_2$ are multiplicatively independent algebraic numbers. Define $D = [\mathbb{Q}(\beta_1, \beta_2) : \mathbb{Q}]/[\mathbb{R}(\beta_1, \beta_2) : \mathbb{R}]$ and set
$$b' = \frac{c_1}{D \log B_2} + \frac{c_2}{D \log B_1},$$
where $B_1, B_2 > 1$ are real numbers such that
$$\log B_i \ge \max\left\{h(\beta_i), \frac{|\log \beta_i|}{D}, \frac{1}{D}\right\}, \quad i \in \{1, 2\}.$$
Then
$$\log |\Lambda| \ge -C D^4 \left(\max\left\{\log b' + 0.21, \frac{m}{D}, 1\right\}\right)^2 \log B_1 \log B_2,$$
for each pair $(m, C)$ in the set
$$\{(10, 32.3), (12, 29.9), (14, 28.2), (16, 26.9), (18, 26.0), (20, 25.2), (22, 24.5), (24, 24.0), (26, 23.5), (28, 23.1), (30, 22.8)\}.$$

Applying this result to the left-hand side of (59), with
$$(m, C) = (10, 32.3), \quad \beta_2 = \frac{2y_2}{y_1}, \quad \beta_1 = 4, \quad c_2 = n, \quad c_1 = 1, \quad D = 1,$$
$$\log B_2 = \log Y, \qquad \log B_1 = 2\log 2, \qquad b' = \frac{n}{2\log 2} + \frac{1}{\log Y} < \frac{1.001\, n}{2\log 2},$$
we may conclude that
$$\log |\Lambda_1| \ge -0.001 - 44.8\left(\max\{\log n - 0.11, 10\}\right)^2 \log Y.$$
Combining this with (58), we thus have
$$n \le 89.6\left(\max\{\log n - 0.11, 10\}\right)^2 + \frac{1.4}{\log Y}.$$
After a little work, we find that $n \le 8961$, contradicting (49) and $q \ge 31$. □

10. The equation $x^2 - q^{2k} = y^n$: proof of Theorem 5

In this section, we complete the proof of Theorem 5. Let $3 \le q < 1000$ be a prime and let $(k, x, y, n)$ be a solution to (7), where $x, k \ge 1$ and $n \ge 3$ are positive integers satisfying $q \nmid x$. Thanks to Lemmata 3.2, 6.1 and 6.2, we may suppose that $y$ is even and that $n \ge 7$ is prime. It follows from Proposition 8.3 that $q = 31$, 127 or 257 and $\bar\rho_{F,n} \sim \bar\rho_{E_q,n}$, where $E_q$ is given in (43), and $F$ is the Frey–Hellegouarch curve given in (40) or (41), according to whether $q \equiv 1$ or $3 \pmod 4$. From Theorem 9, we have
$$n < 1000 \times 257 \times \log 257 < 1.5 \times 10^6.$$
We now give a method which, for a given exponent $n$ and prime $q \in \{31, 127, 257\}$, is capable of showing that (36) has no solutions. This is an adaptation of the method called "predicting the exponents of constants" in [Siksek 2012, Section 13]. Let $n \ge 7$ be prime and choose $\ell \ne q$ to be a prime satisfying
(i) $\ell = tn + 1$ for some positive integer $t$;
(ii) $n \nmid \left((\ell + 1)^2 - a_\ell(E_q)^2\right)$.
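Finding auxiliary primes satisfying (i) and (ii) is quick. A sketch of ours (Python, with the same naive point count as before; the paper uses Magma):

```python
from sympy import isprime

def a_ell(ell, m, eta):
    """a_ell of E_q: Y^2 = X(X+1)(X - eta*2^m), by naive point counting."""
    npts = 1                                          # point at infinity
    for x in range(ell):
        rhs = (x * (x + 1) * (x - eta * pow(2, m, ell))) % ell
        if rhs == 0:
            npts += 1
        else:
            npts += 2 if pow(rhs, (ell - 1) // 2, ell) == 1 else 0
    return ell + 1 - npts

def auxiliary_primes(n, q, m, eta, how_many=3):
    """Primes ell = t*n + 1 (condition (i)), ell != q, with n not dividing
    (ell+1)^2 - a_ell(E_q)^2 (condition (ii))."""
    found, t = [], 1
    while len(found) < how_many:
        ell = t * n + 1
        if ell != q and isprime(ell) and \
           ((ell + 1)**2 - a_ell(ell, m, eta)**2) % n != 0:
            found.append(ell)
        t += 1
    return found

# q = 31 = 2^5 - 1, so m = 5 and eta = -1.
print(auxiliary_primes(11, 31, 5, -1))
```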
For $\kappa \in \mathbb{F}_\ell$, $\kappa \notin \{0, 1\}$, set
$$E(\kappa) : Y^2 = X(X - 1)(X - \kappa).$$
Let $g$ be a primitive root for $\ell$ (that is, a generator for $\mathbb{F}_\ell^*$) and let $h = g^n$. Define $X_\ell \subset \mathbb{F}_\ell^*$ via
$$X_\ell = \left\{\tfrac14 h^r : 0 \le r \le t - 1 \text{ and } h^r \not\equiv 4 \pmod \ell\right\}$$
and
$$Y_\ell = \left\{(\kappa - 1) \cdot (\mathbb{F}_\ell^*)^n : \kappa \in X_\ell \text{ and } a_\ell(E(\kappa))^2 \equiv a_\ell(E_q)^2 \pmod n\right\} \subset \mathbb{F}_\ell^*/(\mathbb{F}_\ell^*)^n.$$
Define further $\phi : \mathbb{Z}/n\mathbb{Z} \to \mathbb{F}_\ell^*/(\mathbb{F}_\ell^*)^n$ via $\phi(s) = q^s \cdot (\mathbb{F}_\ell^*)^n$. Finally, let
$$Z_\ell = \left\{s \in \phi^{-1}(Y_\ell) : \left(\frac{(24 - 6m)s}{n}\right) = 1\right\},$$
where $q = 2^m \pm 1$; thus $m = 5$, 7 and 8 for $q = 31$, 127 and 257, respectively. We note that $n \nmid (m - 4)$ in all cases, so that (45) holds.

Lemma 10.1. Let $q \in \{31, 127, 257\}$ and let $n \ge 7$, $n \ne q$, be prime. Let $\ell_1, \ldots, \ell_t$ be primes distinct from $q$ satisfying (i) and (ii) above, and also
$$\bigcap_{i=1}^{t} Z_{\ell_i} = \varnothing. \qquad (60)$$
Then (7) has no solutions with $k \ge 1$ and $q \nmid x$.

Proof. From Proposition 8.3, $\bar\rho_{F,n} \sim \bar\rho_{E_q,n}$. The minimal discriminant and conductor of $F$ are given in (42). Thus a prime $\ell \nmid 2q$ satisfies $\ell \| N_F$ if and only if $\ell \mid y_1 y_2$; otherwise $\ell \nmid N_F$. Let $\ell \ne q$ be a prime satisfying (i) and (ii). By (ii) we know, thanks to Lemma 7.1, that $\ell \nmid y_1 y_2$, and so
$$a_\ell(F) \equiv a_\ell(E_q) \pmod n.$$
Let $\kappa \in \mathbb{F}_\ell$ satisfy
$$\kappa \equiv \frac{2^{n-2} y_2^n}{y_1^n} \pmod \ell.$$
Then $E(\kappa)/\mathbb{F}_\ell$ is a quadratic twist of $F/\mathbb{F}_\ell$, and so $a_\ell(E(\kappa)) = \pm a_\ell(F)$. We conclude that $a_\ell(E(\kappa))^2 \equiv a_\ell(E_q)^2 \pmod n$. Recall that $\ell = tn + 1$ and $h = g^n$, where $g$ is a primitive root of $\mathbb{F}_\ell$. Observe that
$$4\kappa \equiv \frac{2^n y_2^n}{y_1^n} \equiv h^r \pmod \ell,$$
for some $0 \le r \le t - 1$. Moreover,
$$\kappa - 1 \equiv \frac{2^{n-2} y_2^n}{y_1^n} - 1 \equiv \frac{-q^k}{y_1^n} \not\equiv 0 \pmod \ell.$$
In particular, $\kappa \ne 1$, and so $\kappa \in X_\ell$ and
$$q^k \cdot (\mathbb{F}_\ell^*)^n = (\kappa - 1) \cdot (\mathbb{F}_\ell^*)^n \in Y_\ell.$$
Hence $s \in \phi^{-1}(Y_\ell)$, where $s = \bar{k} \in \mathbb{Z}/n\mathbb{Z}$. Since $k$ also satisfies (45), we conclude that $s \in Z_\ell$. As this is true for $\ell = \ell_1, \ldots, \ell_t$, the element $s$ belongs to the intersection (60), giving a contradiction. □

Corollary 10.2. For $q \in \{31, 127, 257\}$ and prime $n$ with $7 \le n < 1.5 \times 10^6$, equation (7) has no solutions with $k \ge 1$ and $q \nmid x$.

Proof. For $n \ne q$, we ran a short Magma script that searches for suitable primes $\ell_i$ and verifies the criterion of Lemma 10.1. This succeeded for all the primes $7 \le n < 1.5 \times 10^6$ in a few minutes, except for $(q, n) = (31, 7)$. In this case, we found that $\bigcap Z_{\ell_i} = \{\bar 1\}$, no matter how many primes $\ell_i$ we chose. The reason for this is that there is a solution to (36) with $n = 7$ and $k = 1$, namely
$$(-1)^7 - 2^5 \cdot (-1)^7 = 31^1.$$
In the case $n = q$, we are unable to appeal directly to Lemma 10.1, as we no longer necessarily have (45). We can, however, still derive a slightly weaker analogue of Lemma 10.1, with the $Z_\ell$ replaced by the (typically) larger sets $Z'_\ell = \phi^{-1}(Y_\ell)$. For $n = q$, we find that
$$Z'_{311} \cap Z'_{373} = \varnothing, \qquad Z'_{509} \cap Z'_{2287} = \varnothing \qquad \text{and} \qquad Z'_{1543} = \varnothing,$$
for $q = 31$, 127 and 257, respectively. □

To complete the proof of Theorem 5, it remains only to solve the Thue–Mahler equation
$$y_1^7 - 32 y_2^7 = 31^k.$$
Using the Magma implementation of [Gherga and Siksek 2022], we find that the only solution with $k$ positive is with $k = 1$ and $y_1 = y_2 = -1$, corresponding to the solution $(q, k, y, n) = (31, 1, 2, 7)$ to (7).
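For completeness, here is an illustrative Python version of ours of the sets $Z'_\ell = \phi^{-1}(Y_\ell)$ (the full criterion of Lemma 10.1 additionally intersects with the quadratic condition (45); cosets of $(\mathbb{F}_\ell^*)^n$ are represented via the map $x \mapsto x^t$):

```python
from sympy import primitive_root

def a_ell_curve(ell, a2, a4):
    npts = 1
    for x in range(ell):
        rhs = (x**3 + a2 * x**2 + a4 * x) % ell
        npts += 1 if rhs == 0 else \
            (2 if pow(rhs, (ell - 1) // 2, ell) == 1 else 0)
    return ell + 1 - npts

def Z_prime(q, n, m, eta, ell):
    """Z'_ell = phi^{-1}(Y_ell) from Section 10, for a prime ell = t*n + 1."""
    t = (ell - 1) // n
    h = pow(primitive_root(ell), n, ell)
    # E_q: X(X+1)(X - eta*2^m) = X^3 + (1 - eta*2^m)X^2 - eta*2^m*X
    aq = a_ell_curve(ell, (1 - eta * 2**m) % ell, (-eta * 2**m) % ell)
    inv4, Y = pow(4, -1, ell), set()
    for r in range(t):
        hr = pow(h, r, ell)
        if hr == 4 % ell:
            continue
        kappa = inv4 * hr % ell                       # kappa in X_ell
        # E(kappa): X(X-1)(X-kappa) = X^3 - (1+kappa)X^2 + kappa*X
        if (a_ell_curve(ell, -(1 + kappa) % ell, kappa)**2 - aq * aq) % n == 0:
            Y.add(pow((kappa - 1) % ell, t, ell))     # coset class via x^t
    return {s for s in range(n) if pow(q, s * t, ell) in Y}

# residue classes of k mod 11 surviving at ell = 23, for q = 31 (m=5, eta=-1)
print(Z_prime(31, 11, 5, -1, 23))
```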
The remainder of the paper is devoted to solving (8) for odd exponents α, and for the more modest range 3 ≤q < 100. In this section, we focus on the equation x2 + q2k+1 = yn, x, y, k integers, k ≥0, gcd(x, y) = 1, y odd, (61) with exponent n ≥5 prime; here q ≥3 is prime. Theorem 12 (Arif and Abu Muriefah). Suppose q ≥3 and n ≥5 are prime, and that n does not divide the class number of Q(√−q). Then the only solution to (61) corresponds to the identity 224342 + 19 = 555. (62) Proof. The proof given by Arif and Abu Muriefah is somewhat lengthy and slightly incorrect. For the convenience of the reader we give a corrected and simplified proof. Let M = Q(√−q) and 1816 Michael A. Bennett and Samir Siksek suppose that n does not divide the class number of M. This and the assumptions in (61) quickly lead us to conclude that x + qk√−q = αn for some α ∈OM with Norm(α) = y. Thus αn −αn = 2qk√−q. (63) If α/α is a root of unity, then by the coprimality of α and α, we can conclude that α is a unit and so y = 1 giving a contradiction. Thus α/α is not a root of unity. Therefore um = αm −αm α −α is a Lucas sequence. Since αα = y, we note that αα is coprime to 2q. Suppose that the term un has a primitive divisor ℓ. By definition, this is a prime ℓdividing un that does not divide (α−α)2·u1u2 · · · un−1. However α = u+v√−q or α = (u+v√−q)/2 where u, v ∈Z. Thus (α−α)2 = −4q or −q, respectively. In particular ℓ̸= q. It follows from (26) that ℓ= 2. By Theorem 7 and the primality of n, we have n = m2, the rank of apparition of ℓ= 2 in the sequence un. Again by Theorem 7, n = m2 = 2 or 3 contradicting our assumption that n ≥5. It follows that un does not have a primitive divisor. We now invoke the primitive divisor theorem (Theorem 6) to conclude that n = 5 or 7 and that (α, α) is equivalent to ((a− √ b)/2, (a+ √ b)/2) where possibilities for (a, b) are given by (24) if n =5, and by (23) if n =7. For illustration, we take n =5 and (a, b)=(12, −76). Thus α =(±12± √ −76)/2=±6± √ −19, whence q = 19 and y = Norm(α) = 55, quickly giving the solution in (62). The other possibilities for (a, b) in (23) and (24) do not yield solutions to (61). □ Corollary 11.1. The only solutions to (61) with 3 ≤q < 100 and n ≥5 prime correspond to the identities 224342 + 19 = 555, 142 + 47 = 35 and 462 + 71 = 37. Proof. Write hq for the class number of M = Q(√−q). Thanks to Theorem 12, if n ∤hq then the only corresponding solution is 224342 + 19 = 555. Thus we may suppose that n | hq. The only values of q in our range with hq divisible by a prime ≥5 are q = 47, 71 and 79, where hq = 5, 7 and 5, respectively. We therefore reduce to considering the three cases (q, n) = (47, 5), (71, 7) and (79, 5), with hq = n in all three cases. From (61), we have (x + qk√q) · OM = An. If A is principal, then we are in the situation of the proof of Theorem 12 and we obtain a contradiction. Thus A is not principal. Now for the three quadratic fields under consideration the class group is generated by the class [P] where P = 2 · OM + (1 + √−q) 2 · OM Differences between perfect powers: prime power gaps 1817 is one of the two prime ideals dividing 2. We conclude that [A] = [P]−r for some 1 ≤r ≤n −1. Observe that CC is principal for any ideal C of OM, so [C] = [C]−1. We choose B = A or A so that [B] = [P]−r where 1 ≤r ≤(n −1)/2. We note that (x ± qk√−q) · OM = Bn = (P−n)r · (PrB)n, where the ± sign is + if B = A and −if B = A. We note that both P−n and PrB are principal. 
We find that $P^{-n} = 2^{-n-1}(u + v\sqrt{-q}) \cdot \mathcal{O}_M$, where $u, v$ are given by
$$(u, v) = \begin{cases} (-9, 1) & \text{if } q = 47, \\ (-21, 1) & \text{if } q = 71, \\ (7, 1) & \text{if } q = 79. \end{cases}$$
The ideal $P^r B$ is integral as well as principal, and so has the form $(X' + Y'\sqrt{-q}) \cdot \mathcal{O}_M$, where $X'$ and $Y'$ are either both integers or both halves of odd integers. We conclude that
$$2^{s + rn + r}\left(x \pm q^k\sqrt{-q}\right) = (u + v\sqrt{-q})^r \cdot (X + Y\sqrt{-q})^n,$$
where $X, Y \in \mathbb{Z}$ and $s = 0$ or $n$. Equating imaginary parts gives
$$G_r(X, Y) = \pm 2^{s + rn + r}\, q^k,$$
where $G_r \in \mathbb{Z}[X, Y]$ is a homogeneous polynomial of degree $n$. We solved this Thue–Mahler equation using the Thue–Mahler solver associated to the paper [Gherga and Siksek 2022], for each of our three pairs $(q, n)$ and each $0 \le r \le (n - 1)/2$. For illustration, we consider the case $q = 47$, $n = 5$, $r = 2$, so that $(u, v) = (-9, 1)$. We find
$$G_2(X, Y) = 2\left(-9X^5 + 85X^4Y + 4230X^3Y^2 - 7990X^2Y^3 - 99405XY^4 + 37553Y^5\right),$$
and are therefore led to solve the Thue–Mahler equation
$$-9X^5 + 85X^4Y + 4230X^3Y^2 - 7990X^2Y^3 - 99405XY^4 + 37553Y^5 = \pm 2^j q^k.$$
We find that the solutions are $(X, Y, j, k) = (1, 1, 16, 0)$ and $(-1, -1, 16, 0)$, and compute
$$G_2(1, 1) = -2^{17}, \qquad G_2(-1, -1) = 2^{17}.$$
We note that $17 = n + rn + r$; therefore $s = n = 5$. We deduce that
$$x \pm 47^k\sqrt{-47} = \pm 2^{-17}(-9 + \sqrt{-47})^2 \cdot (1 + \sqrt{-47})^5 = \pm(14 - \sqrt{-47}).$$
Thus $x = \pm 14$ and $k = 0$, giving the solution $14^2 + 47 = 3^5$. The other cases are similar. □

12. The equation $x^2 + (-1)^\delta q^{2k+1} = y^5$

We will soon apply Frey–Hellegouarch curves to study the equation $x^2 + (-1)^\delta q^{2k+1} = y^n$ for prime exponents $n \ge 7$, and for $q$ a prime in the range $3 \le q < 100$. In Section 2, we solved this equation for $n \in \{3, 4\}$. This leaves only the exponent $n = 5$, which we now treat through reduction to Thue–Mahler equations.

Lemma 12.1. Let $3 \le q < 100$ be a prime. The only solutions to the equation
$$x^2 - q^{2k+1} = y^5, \quad x, y, k \text{ integers}, \quad k \ge 0, \quad \gcd(x, y) = 1,$$
correspond to the identities
$$2^2 - 3 = 1^5, \quad 2^2 - 5 = (-1)^5, \quad 10^2 - 7^3 = (-3)^5, \quad 56^2 - 11 = 5^5, \quad 16^2 - 13 = 3^5, \quad 4^2 - 17 = (-1)^5,$$
$$7^2 - 17 = 2^5, \quad 6^2 - 37 = (-1)^5, \quad 3788^2 - 37 = 27^5, \quad 3^2 - 41 = (-2)^5, \quad 411^2 - 41^3 = 10^5, \quad 11^2 - 89 = 2^5.$$

Proof. Let $M = \mathbb{Q}(\sqrt{q})$. For $q$ in our range, the class number of $M$ is 1, unless $q = 79$, in which case the class number is 3. Suppose first that $y$ is odd. Then $(x + q^k\sqrt{q})\,\mathcal{O}_M = A^5$, where $A$ is an ideal of $\mathcal{O}_M$. Since the class number is not divisible by 5, we see that $A$ is principal, and conclude that
$$x + q^k\sqrt{q} = \epsilon^r \cdot \alpha^5, \qquad (64)$$
where $\epsilon$ is some fixed choice of a fundamental unit for $M$, $-2 \le r \le 2$, and $\alpha \in \mathcal{O}_M$. Note that $-x + q^k\sqrt{q} = \epsilon^{-r} \cdot \beta^5$, where $\beta$ is one of $\pm\bar\alpha$. Thus we may, without loss of generality, suppose that $0 \le r \le 2$. The case $r = 0$ is easily shown not to lead to any solutions by following the approach in the proof of Theorem 12. Thus we suppose $r = 1$ or 2. Let
$$\theta = \begin{cases} \sqrt{q} & \text{if } q \equiv 3 \pmod 4, \\ (1 + \sqrt{q})/2 & \text{if } q \equiv 1 \pmod 4. \end{cases}$$
Then $\{1, \theta\}$ is a $\mathbb{Z}$-basis for $\mathcal{O}_M$, and so we may write $\alpha = X + Y\theta$, where $X, Y \in \mathbb{Z}$. It follows that
$$\epsilon^r \cdot \alpha^5 = F_r(X, Y) + G_r(X, Y)\,\theta,$$
where $F_r, G_r$ are homogeneous degree-5 polynomials in $\mathbb{Z}[X, Y]$. Equating the coefficients of $\theta$ in (64) yields the Thue–Mahler equations
$$G_r(X, Y) = \begin{cases} q^k & \text{if } q \equiv 3 \pmod 4, \\ 2q^k & \text{if } q \equiv 1 \pmod 4. \end{cases}$$
Solving these equations for prime $3 \le q < 100$ and for $r \in \{1, 2\}$ leads to the solutions given in the statement of the theorem with $y$ odd.
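The polynomials $F_r, G_r$ are obtained by symbolic expansion. A sketch of ours (Python/sympy; illustrative only) for $q = 3$, with fundamental unit $\epsilon = 2 + \sqrt{3}$ and $\theta = \sqrt{3}$:

```python
from sympy import symbols, sqrt, expand, Poly

X, Y = symbols('X Y', integer=True)

q = 3
eps = 2 + sqrt(q)        # fundamental unit of Q(sqrt(3))
theta = sqrt(q)          # q = 3 (mod 4), so O_M = Z[sqrt(q)]

for r in (1, 2):
    e = expand(eps**r * (X + Y * theta)**5)
    # split into F_r + G_r * theta by collecting the sqrt(3)-part
    G = Poly(e.coeff(sqrt(q)), X, Y)
    print(r, G.as_expr())
```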
Lemma 12.2. Let $3 \le q < 100$ be a prime. The only solutions to the equation
$$x^2 + q^{2k+1} = y^5, \quad x, y, k \text{ integers}, \quad k \ge 0, \quad \gcd(x, y) = 1,$$
correspond to the identities
$$5^2 + 7 = 2^5, \quad 181^2 + 7 = 8^5, \quad 22434^2 + 19 = 55^5, \quad 3^2 + 23 = 2^5, \quad 1^2 + 31 = 2^5 \quad\text{and}\quad 14^2 + 47 = 3^5.$$

Proof. By Corollary 11.1 we know that the only solutions when $y$ is odd correspond to the identities $22434^2 + 19 = 55^5$ and $14^2 + 47 = 3^5$. Thus we may suppose $y$ is even, and write $y = 2y_1$. It follows that $q = 7$, $23$, $31$, $47$, $71$ or $79$. Let $M = \mathbb{Q}(\sqrt{-q})$. Let $\theta = (1 + \sqrt{-q})/2$, so that $\{1, \theta\}$ is a $\mathbb{Z}$-basis for $\mathcal{O}_M$. Observe that
$$\left(\frac{x + q^k\sqrt{-q}}{2}\right)\left(\frac{x - q^k\sqrt{-q}}{2}\right) = 2^3 y_1^5,$$
where the two factors on the left-hand side generate coprime ideals. Let $\mathfrak{P} = 2\mathcal{O}_M + \theta \cdot \mathcal{O}_M$; this is one of the two primes above 2. Thus, after possibly changing the sign of $x$,
$$\frac{x + q^k\sqrt{-q}}{2} \cdot \mathcal{O}_M = \mathfrak{P}^3 \cdot \mathfrak{A}^5$$
for some ideal $\mathfrak{A}$ of $\mathcal{O}_M$. The class number of $\mathcal{O}_M$ is 1, 3, 3, 5, 7, 5 according to whether $q = 7$, $23$, $31$, $47$, $71$, $79$. In all cases the class group is cyclic and generated by $[\mathfrak{P}]$. If $q = 47$ or $79$ then the class number is 5, and so $\mathfrak{A}^5$ is principal. Hence $\mathfrak{P}^3$ is principal, which is a contradiction. Thus there are no solutions for $q = 47$ or $79$. Let
$$\mathfrak{C} = \begin{cases} 1 \cdot \mathcal{O}_M & q = 7, 23, 31,\\ \mathfrak{P}^2 & q = 71.\end{cases}$$
Note that $\mathfrak{P}^3\mathfrak{C}^{-5}$ is principal, and we write $\mathfrak{P}^3\mathfrak{C}^{-5} = (u + v\theta) \cdot \mathcal{O}_M$. Thus
$$\frac{x + q^k\sqrt{-q}}{2} \cdot \mathcal{O}_M = (u + v\theta) \cdot (\mathfrak{C}\mathfrak{A})^5.$$
As the class number is coprime to 5, we see that $\mathfrak{C}\mathfrak{A}$ is principal. Write $\mathfrak{C}\mathfrak{A} = (X + Y\theta) \cdot \mathcal{O}_M$. After possibly changing the signs of $X$, $Y$, we have
$$\frac{x - q^k}{2} + q^k\theta = \frac{x + q^k\sqrt{-q}}{2} = (u + v\theta)(X + Y\theta)^5.$$
Comparing the coefficients of $\theta$ yields a degree-5 Thue–Mahler equation. Solving these Thue–Mahler equations as before gives the claimed solutions with $y$ even. □

13. Frey–Hellegouarch curves for a ternary equation of signature $(n, n, 2)$

In studying (7), we employed a factorization argument which reduced to (36) (which in turn we treated as a special case of a Fermat equation having signature $(n, n, n)$). In the remainder of the paper, we are primarily interested in the equation $x^2 + (-1)^\delta q^{2k+1} = y^n$, where $q$ is a prime. We shall treat this, for prime $n \ge 7$, as a Fermat equation of signature $(n, n, 2)$ by rewriting it as $y^n + q^{2k+1}(-1)^{(\delta+1)n} = x^2$, a special case of
$$y^n + q^\alpha z^n = x^2, \quad \gcd(x, y) = 1. \qquad (65)$$
Equation (65) has previously been studied by Ivorra and Kraus [2006], and by Bennett and Skinner [2004]. In this section, we recall some of these results and strengthen them slightly before specialising them to the case $z = \pm 1$ in forthcoming sections.

Theorem 13 (Ivorra and Kraus). Suppose that $q$ is a prime with the property that $q$ cannot be written in the form $q = |t^2 \pm 2^k|$, where $t$ and $k$ are integers, with $k = 0$, $k = 3$ or $k \ge 7$. Then there are no solutions to the Diophantine equation (65) in integers $x$, $y$, $z$, $n$ and $\alpha$ with $n$ prime satisfying
$$n > \big(\sqrt{8(q+1)} + 1\big)^{2(q-1)}. \qquad (66)$$

To verify whether or not a given prime $q$ can be written as $|t^2 - 2^k|$, an old result of Bauer and Bennett can be helpful. We have, from Corollary 1.7 of [Bauer and Bennett 2002], that if $t$ and $k$ are positive integers with $k \ge 3$ odd, then
$$|t^2 - 2^k| > 2^{13k/50},$$
unless $(t, k) \in \{(3, 3), (181, 15)\}$. In particular, a short computation reveals that Theorem 13 is applicable to the following primes $q < 100$:
$$q \in \{11, 13, 19, 29, 43, 53, 59, 61, 67, 83\}. \qquad (67)$$
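The short computation behind (67) is easily reproduced. The following Python sketch (ours) searches for representations $q = |t^2 \pm 2^k|$ with $k = 0$, $k = 3$ or $k \ge 7$; the cutoff $k \le 63$ suffices since, for even $k \ge 8$, primality forces $q = 2^{k/2+1} \pm 1 \le 100$, while for odd $k$ the Bauer–Bennett bound above exceeds 100 once $k \ge 27$ (and the two exceptional pairs produce no new $q < 100$).

```python
# Reproduce the list (67): primes q < 100 with no representation
# q = |t^2 +- 2^k| for k = 0, k = 3 or k >= 7.
from math import isqrt

def is_square(m):
    return m >= 0 and isqrt(m) ** 2 == m

def representable(q):
    for k in [0, 3] + list(range(7, 64)):   # cutoff justified above
        # q = t^2 + 2^k, q = t^2 - 2^k, or q = 2^k - t^2
        if is_square(q - 2**k) or is_square(q + 2**k) or is_square(2**k - q):
            return True
    return False

primes = [p for p in range(3, 100) if all(p % d for d in range(2, p))]
print([q for q in primes if not representable(q)])
# expected output: [11, 13, 19, 29, 43, 53, 59, 61, 67, 83]
```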
We shall make Theorem 13 more precise for these particular values of $q$. To this end we attach to a solution of (65) a certain Frey–Hellegouarch curve, following the recipes of Bennett and Skinner. If $yz$ is even in (65), then we define, assuming, without loss of generality, that $x \equiv 1 \pmod 4$,
$$F : Y^2 + XY = X^3 + \left(\frac{x - 1}{4}\right)X^2 + \frac{y^n}{64}X, \quad\text{if $y$ is even}, \qquad (68)$$
and
$$F : Y^2 + XY = X^3 + \left(\frac{x - 1}{4}\right)X^2 + \frac{q^\alpha z^n}{64}X, \quad\text{if $z$ is even}.$$
If, on the other hand, $yz$ is odd, we define
$$F : Y^2 = X^3 + 2xX^2 + q^\alpha z^n X \qquad (69)$$
or
$$F : Y^2 = X^3 + 2xX^2 + y^n X, \qquad (70)$$
depending on whether $y \equiv 1 \pmod 4$ or $y \equiv -1 \pmod 4$, respectively. Let
$$\kappa = \begin{cases} 1 & \text{if } yz \text{ is even},\\ 5 & \text{if } yz \text{ is odd}.\end{cases} \qquad (71)$$
By the results of [Bennett and Skinner 2004], in each case, we may suppose that $n \nmid \alpha$ and that the mod $n$ representation of $F$ arises from a newform $f$ of weight 2 and level $N = 2^\kappa \cdot q$. Let the $q$-expansion of $f$ be given by (37). As before, we denote the Hecke eigenfield by $K_f = \mathbb{Q}(c_1, c_2, \ldots)$ and its ring of integers by $\mathcal{O}_f$. In particular, there is a prime ideal $\mathfrak{n}$ of $\mathcal{O}_f$ such that (38) holds. Let $\ell \nmid 2q$ be prime and
$$T = \{a \in \mathbb{Z} \cap [-2\sqrt{\ell},\ 2\sqrt{\ell}] : a \equiv 0 \pmod 2\}.$$
We write
$$D'_{f,\ell} = \big((\ell + 1)^2 - c_\ell^2\big) \cdot \prod_{a \in T}(a - c_\ell),$$
and
$$D_{f,\ell} = \begin{cases} \ell \cdot D'_{f,\ell} & \text{if } K_f \ne \mathbb{Q},\\ D'_{f,\ell} & \text{if } K_f = \mathbb{Q}.\end{cases}$$

Lemma 13.1. Let $f$ be a newform of weight 2 and level $N = 2^\kappa \cdot q$. Let $\ell \nmid 2q$ be a prime. If $\bar\rho_{F,n} \sim \bar\rho_{f,\mathfrak{n}}$ then $\mathfrak{n} \mid D_{f,\ell}$.

Proof. The proof is almost identical to the proof of Lemma 8.2. The only difference is the definition of $T$, which takes into account the fact that $F$ has a single rational point of order 2 instead of full 2-torsion. □
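For rational newforms $f$ (so that $c_\ell \in \mathbb{Z}$), the quantity $D_{f,\ell}$ just defined is immediate to compute once the eigenvalues $c_\ell$ are known; the following Python sketch (ours; the eigenvalues themselves must be supplied externally, e.g., from Magma or the LMFDB) implements the definition in that case. For irrational $f$ one works instead with the ideal $D_{f,\ell}$ of $\mathcal{O}_f$ and its norm.

```python
# D_{f,ell} for a rational newform f (K_f = Q), so c_ell is an integer.
from math import gcd, isqrt

def D_f_ell(c_ell, ell):
    bound = isqrt(4 * ell)                      # |a| <= 2*sqrt(ell)
    start = -bound + (bound % 2)                # smallest even a >= -bound
    D = (ell + 1) ** 2 - c_ell ** 2
    for a in range(start, bound + 1, 2):        # even a in [-2 sqrt(ell), 2 sqrt(ell)]
        D *= (a - c_ell)
    return D

# If rho_F,n ~ rho_f,n then n divides every D_{f,ell}; so, with eigenvalue
# data [(ell, c_ell), ...], n divides e.g.
#   g = gcd of [D_f_ell(c, ell) for (ell, c) in data]
# and a small nonzero g bounds n (values supplied here would be hypothetical).
```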
The following is a slight refinement of Theorem 1.3 of [Bennett and Skinner 2004].

Proposition 13.2. Suppose that $q$ belongs to (67). Then there are no solutions to (65) in integers $x$, $y$, $z$, $n$ and $\alpha$ with $\gcd(x, y) = 1$ and $n \ge 7$ prime, except, possibly, $n = 7$ and $q \in \{29, 43, 53, 59, 61\}$, or one of the following holds:
• $q = 11$, $n = 7$ and $yz \equiv 1 \pmod 2$, or
• $q = 19$, $n = 7$ and $yz \equiv 1 \pmod 2$, or
• $q = 43$, $n = 11$ and $yz \equiv 1 \pmod 2$, or
• $q = 53$, $n = 17$ and $yz \equiv 1 \pmod 2$, or
• $q = 59$, $n = 11$ and $yz \equiv 0 \pmod 2$, or
• $q = 61$, $n = 13$ and $yz \equiv 1 \pmod 2$, or
• $q = 67$, $n \in \{7, 11, 13, 17\}$ and $yz \equiv 1 \pmod 2$, or
• $q = 83$, $n = 7$ and $yz \equiv 1 \pmod 2$.

Proof. For a weight-2 newform $f$ of level $N$ and primes $\ell_1, \ldots, \ell_m$ (all coprime to $2q$), write $D_{f,\ell_1,\ldots,\ell_m}$ for the ideal of $\mathcal{O}_f$ generated by $D_{f,\ell_1}, \ldots, D_{f,\ell_m}$, and let $B_{f,\ell_1,\ldots,\ell_m} \in \mathbb{Z}$ be the norm of the ideal $D_{f,\ell_1,\ldots,\ell_m}$. If $\bar\rho_{F,n} \sim \bar\rho_{f,\mathfrak{n}}$ then $\mathfrak{n} \mid D_{f,\ell_1,\ldots,\ell_m}$ by Lemma 13.1, and thus $n \mid B_{f,\ell_1,\ldots,\ell_m}$. In our computations, we take $\ell_1, \ldots, \ell_m$ to be the primes $< 100$ coprime to $2q$, and we let $B_f = B_{f,\ell_1,\ldots,\ell_m}$. If $B_f \ne 0$, then we certainly have a bound on $n$. If $B_f$ is divisible only by primes $\le 5$, then we know that (38) does not hold for that particular $f$, and we can eliminate it from further consideration. For primes $q$ in (67), we apply this with newforms $f$ of levels $N = 2^\kappa q$, $\kappa \in \{1, 5\}$. We obtain the desired conclusion that (65) has no solutions provided $n \ge 7$ is prime, unless $q \in \{29, 43, 53, 59, 61\}$ and $n = 7$, or $(q, n, \kappa)$ is one of
$$(11, 7, 5),\ (13, 7, 1),\ (19, 7, 5),\ (43, 11, 1),\ (43, 11, 5),\ (53, 17, 5),\ (59, 11, 1),\ (61, 31, 1),$$
$$(61, 13, 5),\ (67, 17, 1),\ (67, 7, 5),\ (67, 11, 5),\ (67, 13, 5),\ (67, 17, 5),\ (83, 7, 1),\ (83, 7, 5).$$
We show that the triples $(13, 7, 1)$, $(43, 11, 1)$, $(61, 31, 1)$, $(67, 17, 1)$ and $(83, 7, 1)$ do not have corresponding solutions; the remaining triples lead to the noted possible exceptions. For illustration, take $q = 83$ and $\kappa = 1$, so that $N = 2 \times 83 = 166$. There are three conjugacy classes of weight-2 newforms of level $N$, which we denote by $f_1$, $f_2$, $f_3$, which respectively have Hecke eigenfields $\mathbb{Q}$, $\mathbb{Q}(\sqrt{5})$ and $\mathbb{Q}(\theta)$, where $\theta^3 - \theta^2 - 6\theta + 4 = 0$. We find
$$B_{f_1} = 3^2 \times 5, \quad B_{f_2} = 5, \quad B_{f_3} = 7.$$
We therefore deduce that $f = f_3$ and $n = 7$. In fact, $D_f = (7, 3 + \theta)$ is a prime ideal above 7, so we take $\mathfrak{n} = (7, 3 + \theta)$. A short calculation verifies the congruences in hypotheses (i) and (ii) of Theorem 8, whence $\ell + 1 \equiv c_\ell \pmod{\mathfrak{n}}$ for all $\ell$ with $\ell \nmid 2 \cdot 83$. It follows from Lemma 7.1 that $a_\ell(F) \equiv c_\ell \pmod{\mathfrak{n}}$ for all primes $\ell$ of good reduction for $F$, and hence $7 \mid (\ell + 1 - a_\ell(F))$ for all such primes $\ell$ of good reduction. This now implies that $\bar\rho_{F,7}$ is reducible [Serre 1975, IV-6], giving a contradiction. We argue similarly for $(q, n, \kappa) = (13, 7, 1)$, $(43, 11, 1)$, $(61, 31, 1)$, $(67, 17, 1)$. In each case, Lemma 13.1 eliminates all but one class of newforms, which are then treated via Theorem 8. □

For other odd primes $q < 100$, outside the set (67), we can, in certain cases, still show that (65) has no nontrivial solutions for suitably large $n$, under the additional assumption that $yz \equiv 0 \pmod 2$ or, for other $q$, under the assumption that $yz \equiv 1 \pmod 2$. To be precise, we have the following two propositions.

Proposition 13.3. Suppose that $q \in \{3, 5, 37, 73\}$. Then there are no solutions to (65) in integers $x$, $y$, $z$, $n$ and $\alpha$ with $yz \equiv 0 \pmod 2$, $\gcd(x, y) = 1$ and $n \ge 7$ prime, except, possibly, $(q, n) = (73, 7)$.

Proposition 13.4. Suppose that $q \in \{23, 31, 47, 71, 79, 97\}$. Then there are no solutions to (65) in integers $x$, $y$, $z$, $n$ and $\alpha$ with $yz \equiv 1 \pmod 2$, $\gcd(x, y) = 1$ and $n \ge 7$ prime, except, possibly, $n = 7$ and $q \in \{23, 31, 47, 71, 97\}$, or $(q, n) = (79, 11)$, or $(q, n) = (97, 29)$.

As in the case of Proposition 13.2, these results follow after a small amount of computation, by applying Lemma 13.1 and Theorem 8.

14. The equation $x^2 \pm q^{2k+1} = y^n$ and proofs of Theorems 2 and 3

We now specialize and improve on the results of Section 13, proving the following.

Proposition 14.1. Let $(x, y, k)$ be a solution to the equation
$$x^2 + (-1)^\delta q^{2k+1} = y^n, \quad \delta \in \{0, 1\}, \quad k \ge 0, \quad \gcd(x, y) = 1, \qquad (72)$$
where $q$ is a prime in the range $3 \le q < 100$, and $n \ge 7$ is prime. Suppose, in addition, that
(a) if $y$ is odd then $\delta = 1$;
(b) if $\delta = 1$ then $q \notin \{3, 5, 17, 37\}$.
If $y$ is even, suppose, without loss of generality, that $x \equiv 1 \pmod 4$. Write
$$\kappa = \begin{cases} 1 & \text{if } y \text{ is even},\\ 5 & \text{if } y \text{ is odd}.\end{cases} \qquad (73)$$
Let $v \in \{0, 1\}$ satisfy $k \equiv v \pmod 2$. Attach to the solution $(x, y, k)$ the Frey–Hellegouarch curve
$$G = G_{x,k} : \begin{cases} Y^2 = X^3 + 4xX^2 + 4(x^2 + (-1)^\delta q^{2k+1})X & \text{if } \kappa = 1,\\ Y^2 = X^3 - 4xX^2 + 4(x^2 + (-1)^\delta q^{2k+1})X & \text{if } \kappa = 5 \text{ and } q \equiv (-1)^\delta \pmod 4,\\ Y^2 = X^3 + 2xX^2 + (x^2 + (-1)^\delta q^{2k+1})X & \text{if } \kappa = 5 \text{ and } q \equiv (-1)^{\delta+1} \pmod 4.\end{cases}$$

q   δ  κ  v  E
7   0  1  0  14a1
7   0  1  1  14a1
23  0  1  0  46a1
31  0  1  0  62a1
31  0  1  1  62a1
41  1  1  0  82a1
41  1  5  0  1312a1, 1312b1
41  1  5  1  1312a1, 1312b1
47  0  1  0  94a1
71  0  1  0  142c1
71  0  1  1  142c1
73  1  5  0  2336a1, 2336b1
73  1  5  1  2336a1, 2336b1
79  0  1  0  158e1
89  1  1  0  178b1
97  1  1  0  194a1

Table 3. Data for Proposition 14.1. Here the elliptic curves $E$ are given by their Cremona labels.

Then either $n > 1000$ and $\bar\rho_{G,n} \sim \bar\rho_{E,n}$, where $E/\mathbb{Q}$ is an elliptic curve of conductor $2^\kappa q$ given in Table 3, or the solution $(x, y, k)$ corresponds to one of the identities
$$11^2 + 7 = 2^7, \quad 45^2 + 23 = 2^{11}, \quad 13^2 - 41 = 2^7, \quad 9^2 + 47 = 2^7,$$
$$7^2 + 79 = 2^7, \quad 91^2 - 89 = 2^{13}, \quad 15^2 - 97 = 2^7.$$
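Each of the seven exceptional identities is immediate to verify; for instance, via the following one-liner-style Python check (ours).

```python
# The seven identities of Proposition 14.1: x^2 + (-1)^delta q = 2^n
# (each has k = 0 and y = 2).
for x, delta, q, n in [(11, 0, 7, 7), (45, 0, 23, 11), (13, 1, 41, 7),
                       (9, 0, 47, 7), (7, 0, 79, 7), (91, 1, 89, 13),
                       (15, 1, 97, 7)]:
    assert x * x + (-1) ** delta * q == 2 ** n
```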
Before proving this result, we make a few remarks on the assumptions in Proposition 14.1. Our eventual goal is to prove Theorems 1, 2 and 3, and thus we are interested in the equation $x^2 + (-1)^\delta q^\alpha = y^n$ where $3 \le q < 100$. Theorems 4 and 5 (proved in Sections 5 and 10, respectively) treat the case where $\alpha$ is even, so we are reduced to $\alpha = 2k + 1$. The results of Section 2, Corollary 11.1 and Lemmas 12.1, 12.2 allow us to restrict the exponent $n$ to be a prime $\ge 7$. Thanks to Theorem 12, we need not consider the case where $\delta = 0$ and $y$ is odd, which explains the reason for assumption (a). With a view to proving the proposition, we will soon provide a method which is usually capable, for a fixed $q$, $\delta$ and $n$, of showing that (72) does not have a solution. If $\delta = 1$, and $q$ is one of the values 3, 5, 17 or 37, then there is a solution to (72) for all odd values of the exponent $n$:
$$2^2 - 3 = 1^n, \quad 2^2 - 5 = (-1)^n, \quad 4^2 - 17 = (-1)^n, \quad 6^2 - 37 = (-1)^n;$$
and so our method fails if $\delta = 1$ and $q$ is one of these four values. This explains assumption (b) in the statement of the proposition.

We note that (72) is a special case of (65) with $z$ specialised to the value $(-1)^{\delta+1}$, and with $\alpha = 2k + 1$. The value $\kappa$ in the statement of the proposition agrees with the value for $\kappa$ in (71) given in the previous section. We note that if $y$ is odd, then $y \equiv (-1)^\delta \cdot q \pmod 4$. The Frey–Hellegouarch curve $G$ is, up to isogeny, the same as the Frey–Hellegouarch curve $F$ in the previous section, but is more convenient for our purposes. More precisely, the model $G$ is isomorphic to $F$ given in (68) if $y$ is even (i.e., $\kappa = 1$), and to $F$ given in (70) if $y \equiv 3 \pmod 4$ (i.e., $\kappa = 5$ and $q \equiv (-1)^{\delta+1} \bmod 4$). It is 2-isogenous to $F$ in (69) if $y \equiv 1 \pmod 4$ (i.e., $\kappa = 5$ and $q \equiv (-1)^\delta \bmod 4$). Thus $\bar\rho_{F,n} \sim \bar\rho_{G,n}$ in all three cases. We conclude from the previous section that $\bar\rho_{G,n} \sim \bar\rho_{f,\mathfrak{n}}$, where $f$ is a weight-2 newform of level $N = 2^\kappa q$.

Note that if $\kappa = 1$ (that is, $y$ is even) then $1 + (-1)^\delta q \equiv 0 \pmod 8$. This together with the assumptions of Proposition 14.1 shows that we are concerned with 30 possibilities for the triple $(q, \delta, \kappa)$, namely
$$(7, 0, 1),\ (7, 1, 5),\ (11, 1, 5),\ (13, 1, 5),\ (19, 1, 5),\ (23, 0, 1),\ (23, 1, 5),\ (29, 1, 5),\ (31, 0, 1),\ (31, 1, 5),$$
$$(41, 1, 1),\ (41, 1, 5),\ (43, 1, 5),\ (47, 0, 1),\ (47, 1, 5),\ (53, 1, 5),\ (59, 1, 5),\ (61, 1, 5),\ (67, 1, 5),\ (71, 0, 1),$$
$$(71, 1, 5),\ (73, 1, 1),\ (73, 1, 5),\ (79, 0, 1),\ (79, 1, 5),\ (83, 1, 5),\ (89, 1, 1),\ (89, 1, 5),\ (97, 1, 1),\ (97, 1, 5). \qquad (74)$$

Bounding the exponent n. In the previous section we defined an ideal $D_{f,\ell_1,\ldots,\ell_r}$ which, if nonzero, allows us to bound the exponent $n$ in (65). That bound will also be valid for (72), since it is a special case of (65). We now offer a refinement that is often capable of yielding a better bound for (72). Fix a triple $(q, \delta, \kappa)$ from the above list.
We also fix $v \in \{0, 1\}$ and suppose that $k \equiv v \pmod 2$. Let $f$ be a weight-2 newform of level $N = 2^\kappa q$ with $q$-expansion as in (37). Write $K_f$ for the Hecke eigenfield of $f$, and $\mathcal{O}_f$ for the ring of integers of $K_f$. For a prime $\ell \ne 2, q$, define
$$S_\ell = \{a_\ell(G_{w,v}) : w \in \mathbb{F}_\ell,\ w^2 + (-1)^\delta q^{2v+1} \not\equiv 0 \pmod \ell\}.$$
Let
$$T = T_\ell = \begin{cases} S_\ell \cup \{\ell + 1, -\ell - 1\} & \text{if } (-1)^{\delta+1}q \text{ is a square modulo } \ell,\\ S_\ell & \text{otherwise}.\end{cases}$$
Let
$$E'_\ell = \prod_{a \in T}(a - c_\ell) \qquad\text{and}\qquad E_\ell = \begin{cases} \ell \cdot E'_\ell & \text{if } K_f \ne \mathbb{Q},\\ E'_\ell & \text{if } K_f = \mathbb{Q},\end{cases}$$
where, as before, $c_\ell$ is the $\ell$-th coefficient in the $q$-expansion of $f$.

Lemma 14.2. Let $\mathfrak{n}$ be a prime ideal of $\mathcal{O}_f$ above $n$. If $\bar\rho_{G,n} \sim \bar\rho_{f,\mathfrak{n}}$ then $\mathfrak{n} \mid E_\ell$.

Proof. Write $k = 2u + v$ with $u \in \mathbb{Z}$. Let $w \in \mathbb{F}_\ell$ satisfy $w \equiv x/q^{2u} \pmod \ell$. Hence
$$y^n = x^2 + (-1)^\delta q^{2k+1} \equiv q^{4u} \cdot (w^2 + (-1)^\delta q^{2v+1}) \pmod \ell.$$
It follows that $\ell \mid y$ if and only if $w^2 + (-1)^\delta q^{2v+1} \equiv 0 \pmod \ell$. Suppose first that $\ell \nmid y$. The elliptic curves $G_{x,k}/\mathbb{F}_\ell$ and $G_{w,v}/\mathbb{F}_\ell$ are isomorphic, and so $a_\ell(G_{x,k}) = a_\ell(G_{w,v})$. In particular, $a_\ell(G_{x,k}) \in T_\ell$ and so $a_\ell(G_{x,k}) - c_\ell$ divides $E_\ell$. Likewise, if $\ell \mid y$ (which can only happen if $(-1)^{\delta+1}q$ is a square modulo $\ell$) then $(\ell + 1)^2 - c_\ell^2$ divides $E_\ell$. The lemma follows from Lemma 7.1. □

A sieve. Lemma 14.2 will soon allow us to eliminate most possibilities for the newform $f$, in a manner similar to Propositions 13.2, 13.3 and 13.4. We will still need to treat some cases for fixed exponent $n$. To this end, we will employ a sieving technique similar to the one in Section 10. Fix a prime $n \ge 7$, and let $\mathfrak{n}$ be a prime ideal of $\mathcal{O}_f$ above $n$. Let $\ell \ne q$ be a prime. Suppose
(i) $\ell = tn + 1$ for some positive integer $t$;
(ii) either $n \nmid (4 - c_\ell^2)$, or $(-1)^{\delta+1}q$ is not a square modulo $\ell$.
Let
$$A = \{m \in \{0, 1, \ldots, 2n-1\} : m \equiv v \pmod 2,\ n \nmid (2m+1)\},$$
$$X_\ell = \{(z, m) \in \mathbb{F}_\ell \times A : (z^2 + (-1)^\delta q^{2m+1})^t \equiv 1 \pmod \ell\},$$
$$Y_\ell = \{(z, m) \in X_\ell : a_\ell(G_{z,m}) \equiv c_\ell \pmod{\mathfrak{n}}\},$$
$$Z_\ell = \{m : \text{there exists } z \text{ such that } (z, m) \in Y_\ell\}.$$

Lemma 14.3. Let $\ell_1, \ldots, \ell_r$ be primes $\ne q$ satisfying (i), (ii). Let
$$Z_{\ell_1,\ldots,\ell_r} = \bigcap_{i=1}^{r} Z_{\ell_i}.$$
If $\bar\rho_{G,n} \sim \bar\rho_{f,\mathfrak{n}}$ then $(k \bmod 2n) \in Z_{\ell_1,\ldots,\ell_r}$.

Proof. Let $m$ be the unique element of $\{0, 1, \ldots, 2n-1\}$ satisfying $k \equiv m \pmod{2n}$. Let $\ell \ne q$ be a prime satisfying (i) and (ii). It is sufficient to show that $m \in Z_\ell$. First we will demonstrate that $\ell \nmid y$. If $(-1)^{\delta+1}q$ is not a square modulo $\ell$ then $\ell \nmid y$ from (72). Otherwise, by (ii), $n \nmid (4 - c_\ell^2)$. However, from (i) and the fact that $n \mid \mathfrak{n}$, we have $\ell + 1 \equiv 2 \pmod{\mathfrak{n}}$ and so $\mathfrak{n} \nmid ((\ell+1)^2 - c_\ell^2)$. It follows from Lemma 7.1 that $\ell$ is a prime of good reduction for $G_{x,k}$ and so $\ell \nmid y$. We deduce from Lemma 7.1 that $a_\ell(G_{x,k}) \equiv c_\ell \pmod{\mathfrak{n}}$. In the previous section, we observed that $n \nmid \alpha$ in (65) thanks to the results of [Bennett and Skinner 2004], whence $n \nmid (2k+1)$. Since $k \equiv v \pmod 2$, we know that $m \in A$. Write $k = 2nb + m$ with $b$ a nonnegative integer, and let $z \in \mathbb{F}_\ell$ satisfy $z \equiv x/q^{2nb} \pmod \ell$. Then
$$z^2 + (-1)^\delta q^{2m+1} \equiv \frac{1}{q^{4nb}}(x^2 + (-1)^\delta q^{2k+1}) \equiv \left(\frac{y}{q^{4b}}\right)^n \pmod \ell.$$
From (i), we deduce that
$$(z^2 + (-1)^\delta q^{2m+1})^t \equiv \left(\frac{y}{q^{4b}}\right)^{\ell-1} \equiv 1 \pmod \ell.$$
Thus $(z, m) \in X_\ell$. Moreover, we have that $G_{x,k}/\mathbb{F}_\ell$ and $G_{z,m}/\mathbb{F}_\ell$ are isomorphic elliptic curves, whence $a_\ell(G_{z,m}) = a_\ell(G_{x,k}) \equiv c_\ell \pmod{\mathfrak{n}}$. Thus $(z, m) \in Y_\ell$ and so $m \in Z_\ell$, as required. □

Remarks. We would like to explain how to compute $Z_\ell$ efficiently, given $n$ and $\ell$.
(1) In our computations, the value $t$ will be relatively small compared to $n$ and to $\ell = tn + 1$. Let $g$ be a primitive root modulo $\ell$ (that is, a cyclic generator for $\mathbb{F}_\ell^\times$), and let $h = g^n$. The set $X_\ell$ consists of pairs $(z, m) \in \mathbb{F}_\ell \times A$ such that $(z^2 + (-1)^\delta q^{2m+1})^t \equiv 1 \pmod \ell$. Hence $z^2 + (-1)^\delta q^{2m+1}$ is one of the values $1, h, h^2, \ldots, h^{t-1}$. Thus, to compute $X_\ell$, we run through $i = 0, 1, \ldots, t-1$ and $m \in A$ and solve $z^2 = h^i - (-1)^\delta q^{2m+1}$. We note that the expected cardinality of $X_\ell$ should be roughly $t \times \#A \approx t \times n \approx \ell$.
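To make the sieve concrete, here is an illustrative Python version (ours; the paper's computations use Magma and are far more efficient) for the quadruple $(q, \delta, \kappa, v) = (7, 0, 1, 0)$ treated below, for which the relevant newform is rational and corresponds to the curve 14a1, so that $c_\ell = a_\ell(14a1)$; traces of Frobenius are computed by naive point counting, which limits the sketch to small $\ell$.

```python
# Sketch of the sieve of Lemma 14.3 for (q, delta, kappa, v) = (7, 0, 1, 0).
def is_prime(m):
    if m < 2: return False
    d = 2
    while d * d <= m:
        if m % d == 0: return False
        d += 1
    return True

def legendre(a, p):               # Legendre symbol (a/p), p an odd prime
    a %= p
    if a == 0: return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def ap(a1, a2, a3, a4, a6, p):    # a_p of y^2 + a1*xy + a3*y = x^3 + a2*x^2 + a4*x + a6
    # complete the square: (2y + a1*x + a3)^2 = 4*f(x) + (a1*x + a3)^2
    return -sum(legendre(4 * (x**3 + a2*x*x + a4*x + a6) + (a1*x + a3)**2, p)
                for x in range(p))

def Z_set(q, delta, v, n, ell, c_ell):
    """Z_ell of Lemma 14.3 in the kappa = 1 case (Frey curve of Prop. 14.1)."""
    t = (ell - 1) // n
    A = [m for m in range(2 * n) if m % 2 == v and (2 * m + 1) % n != 0]
    Z = set()
    for m in A:
        c = (-1) ** delta * pow(q, 2 * m + 1, ell) % ell
        for z in range(ell):
            w = (z * z + c) % ell
            if pow(w, t, ell) != 1:
                continue                       # (z, m) not in X_ell
            # G_{z,m}: Y^2 = X^3 + 4zX^2 + 4(z^2 + (-1)^delta q^(2m+1))X
            if (ap(0, 4 * z % ell, 0, 4 * w % ell, 0, ell) - c_ell) % n == 0:
                Z.add(m)
                break
    return Z

q, delta, v, n = 7, 0, 0, 7
t, shown = 0, 0
while shown < 3:                  # first three primes ell = tn + 1 obeying (i), (ii)
    t += 1
    ell = t * n + 1
    if ell == q or not is_prime(ell): continue
    c_ell = ap(1, 0, 1, 4, -6, ell)            # a_ell of 14a1
    if (4 - c_ell**2) % n == 0 and legendre((-1)**(delta+1) * q, ell) == 1:
        continue                               # condition (ii) fails
    print(ell, sorted(Z_set(q, delta, v, n, ell, c_ell)))
    shown += 1
```

Intersecting $Z_\ell$ over many such $\ell$ should, by Table 4 below, cut the residue classes for $n = 7$ down to $\{0, 8, 12\}$.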
(2) It seems at first that, in order to compute $Y_\ell$ and $Z_\ell$, we need to compute $a_\ell(G_{z,m})$ for all $(z, m) \in X_\ell$, and this might be an issue for large $\ell$. There is in fact a shortcut that often means that we only need to perform a few of these computations. In fact we will need to compute $Z_\ell$ for large values of $\ell$ only for rational newforms $f$ that correspond to elliptic curves $E/\mathbb{Q}$ with nontrivial 2-torsion. In this case, we note that $a_\ell(G_{z,m}) \equiv a_\ell(E) \pmod 2$, as both elliptic curves have nontrivial 2-torsion. If $(z, m) \in Y_\ell$, then $a_\ell(G_{z,m}) \equiv a_\ell(E) \pmod{2n}$. However, by the Hasse–Weil bounds, $|a_\ell(G_{z,m}) - a_\ell(E)| \le 4\sqrt{\ell}$. Suppose, in addition, that $n^2 > 4\ell$ (which will usually be satisfied, as $t$ is typically small). Then the congruence $a_\ell(G_{z,m}) \equiv c_\ell = a_\ell(E) \pmod{2n}$ is equivalent to the equality $a_\ell(G_{z,m}) = a_\ell(E)$, and so to $\#G_{z,m}(\mathbb{F}_\ell) = \#E(\mathbb{F}_\ell)$. To check whether the equality $\#G_{z,m}(\mathbb{F}_\ell) = \#E(\mathbb{F}_\ell)$ holds for a particular pair $(z, m) \in X_\ell$, we first choose a random point $Q \in G_{z,m}(\mathbb{F}_\ell)$ and check whether $\#E(\mathbb{F}_\ell) \cdot Q = 0$. Only for pairs $(z, m) \in X_\ell$ that pass this test do we need to compute $a_\ell(G_{z,m})$ and check the congruence $a_\ell(G_{z,m}) \equiv a_\ell(E) \pmod n$.

A refined sieve. We note that if $Z_{\ell_1,\ldots,\ell_r} = \emptyset$ then $\bar\rho_{G,n} \not\sim \bar\rho_{f,\mathfrak{n}}$. In our computations, described later, we are always able to find suitable primes $\ell_1, \ldots, \ell_r$ satisfying (i), (ii), so that $Z_{\ell_1,\ldots,\ell_r} = \emptyset$, at least for $n$ suitably large. For smaller values of $n$ (say less than 50), we occasionally failed. We now describe a refined sieving method that, whilst being somewhat slow, has a better chance of succeeding for those smaller values of the exponent $n$.

Let $(q, \delta, \kappa)$ be one of our 30 triples given in (74), and let $n \ge 7$ be a prime. Suppose that $(x, y, k)$ is a solution to (72) where $y$ is even if and only if $\kappa = 1$. Let $\varphi = \sqrt{(-1)^{\delta+1}q}$ and set $M = \mathbb{Q}(\varphi)$. Let $\mathfrak{P}$ be one of the prime ideals of $\mathcal{O}_M$ above 2. Our first goal is to produce a finite set $S \subset M^*$ such that
$$x + q^k\varphi = \gamma \cdot \alpha^n \qquad (75)$$
for some $\gamma \in S$ and $\alpha \in \mathcal{O}_M$. This is the objective of Lemmata 14.4 and 14.5. Both of these make an additional assumption on the class group, but this assumption will in fact be satisfied in all cases where we need to apply our refined sieve.

Lemma 14.4. Let $\kappa = 5$. Suppose that the class group $\mathrm{Cl}(\mathcal{O}_M)$ of $\mathcal{O}_M$ is cyclic and generated by the class $[\mathfrak{P}]$. Let $h = \#\mathrm{Cl}(\mathcal{O}_M)$ and set
$$I = \{0 \le i \le h - 1 : \mathfrak{P}^{-ni} \text{ is principal}\}.$$
Choose for each $i \in I$ a generator $\beta_i$ for $\mathfrak{P}^{-ni}$. Let $\epsilon$ be a fundamental unit for $M$ (recall that if $\kappa = 5$ then $\delta = 1$ and so $M$ is real). Let
$$S = \Big\{\epsilon^j\beta_i : -\tfrac{n-1}{2} \le j \le \tfrac{n-1}{2},\ i \in I\Big\}.$$
Then there is some $\gamma \in S$ and $\alpha \in \mathcal{O}_M$ such that (75) holds. Also, $\operatorname{Norm}(\alpha) = 2^\mu y$ for some $\mu \ge 0$.

Proof. As $\kappa = 5$, we have that $y$ is odd. Then $(x + q^k\varphi)\mathcal{O}_M = \mathfrak{A}^n$, where $\mathfrak{A}$ is an ideal of $\mathcal{O}_M$ with norm $y$. Since $[\mathfrak{P}]$ generates the class group, the same is true of $[\mathfrak{P}]^{-1}$. Hence $[\mathfrak{A}] = [\mathfrak{P}]^{-i}$ for some $i \in \{0, 1, \ldots, h-1\}$. Now
$$(x + q^k\varphi)\mathcal{O}_M = \mathfrak{P}^{-ni} \cdot (\mathfrak{P}^i \cdot \mathfrak{A})^n.$$
Since $\mathfrak{P}^i \cdot \mathfrak{A}$ is principal, it follows that $\mathfrak{P}^{-ni}$ is also principal. The lemma follows. □
Lemma 14.5. Let $\kappa = 1$. Suppose that the class group $\mathrm{Cl}(\mathcal{O}_M)$ of $\mathcal{O}_M$ is cyclic and generated by the class $[\mathfrak{P}]$. Let $h = \#\mathrm{Cl}(\mathcal{O}_M)$ and set
$$I = \{0 \le i \le h - 1 : \mathfrak{P}^{n(1-i)-2} \text{ is principal}\}.$$
Choose for each $i \in I$ a generator $\beta_i$ for $\mathfrak{P}^{n(1-i)-2}$. Let $S' = \{\beta_i : i \in I\} \cup \{\bar\beta_i : i \in I\}$, where $\bar\beta_i$ denotes the Galois conjugate of $\beta_i$. Let
$$S = \begin{cases} \{2\beta : \beta \in S'\} & \text{if } \delta = 0,\\ \{2\epsilon^j\beta : -(n-1)/2 \le j \le (n-1)/2,\ \beta \in S'\} & \text{if } \delta = 1,\end{cases}$$
where $\epsilon$ is a fundamental unit for $M$. Then there is some $\gamma \in S$ and $\alpha \in \mathcal{O}_M$ such that (75) holds. Also, $\operatorname{Norm}(\alpha) = 2^\mu y$ for some $\mu \in \mathbb{Z}$.

Proof. As $\kappa = 1$, we have that $y$ is even. Then
$$\left(\frac{x + q^k\varphi}{2}\right)\mathcal{O}_M = \mathfrak{C}^{n-2}\mathfrak{A}^n,$$
where $\mathfrak{A}$ is an ideal of $\mathcal{O}_M$ with norm $y/2$ and $\mathfrak{C}$ is one of $\mathfrak{P}$, $\bar{\mathfrak{P}}$. Since $[\mathfrak{P}]$ generates the class group, so does $[\mathfrak{C}]^{-1}$. Hence $[\mathfrak{A}] = [\mathfrak{C}]^{-i}$ for some $i \in \{0, 1, \ldots, h-1\}$. Now
$$\left(\frac{x + q^k\varphi}{2}\right)\mathcal{O}_M = \mathfrak{C}^{n(1-i)-2} \cdot (\mathfrak{C}^i\mathfrak{A})^n.$$
But $\mathfrak{C}^i \cdot \mathfrak{A}$ is principal, whence $\mathfrak{C}^{n(1-i)-2}$ is principal, and so $i \in I$ and $\mathfrak{C}^{n(1-i)-2}$ is generated by either $\beta_i$ or $\bar\beta_i$. The lemma follows. □

We will now describe our refined sieve. Fix $m \in \{0, 1, \ldots, 2n-1\}$ and suppose $k \equiv m \pmod{2n}$. Let $\mathfrak{n}$ be a prime ideal of $\mathcal{O}_f$ above $n$. Let $\ell \ne q$ be a prime. Suppose
(a) $\ell = tn + 1$ for some positive integer $t$;
(b) $n \nmid (4 - c_\ell^2)$;
(c) $(-1)^{\delta+1}q$ is a square modulo $\ell$.
We choose an integer $s$ such that $s^2 \equiv (-1)^{\delta+1}q \pmod \ell$. Let
$$\mathfrak{L} = \ell\mathcal{O}_M + (s - \varphi)\mathcal{O}_M.$$
By the Dedekind–Kummer theorem, $\ell$ splits in $\mathcal{O}_M$ and $\mathfrak{L}$ is one of the two prime ideals above $\ell$. In particular, $\mathcal{O}_M/\mathfrak{L} \cong \mathbb{F}_\ell$ and $\varphi \equiv s \pmod{\mathfrak{L}}$. Let
$$X_{\ell,m} = \{z \in \mathbb{F}_\ell : (z^2 + (-1)^\delta q^{2m+1})^t \equiv 1 \pmod \ell\},$$
$$Y_{\ell,m} = \{z \in X_{\ell,m} : a_\ell(G_{z,m}) \equiv c_\ell \pmod{\mathfrak{n}}\},$$
$$U_{\ell,m} = \{(z, \gamma) : z \in Y_{\ell,m},\ \gamma \in S,\ (z + q^m\varphi)^t \equiv \gamma^t \pmod{\mathfrak{L}}\},$$
$$W_{\ell,m} = \{\gamma : \text{there exists } z \text{ such that } (z, \gamma) \in U_{\ell,m}\}.$$

Lemma 14.6. Let $\ell_1, \ldots, \ell_r$ be primes $\ne q$ satisfying (a), (b) and (c) above. Let
$$W = W_{\ell_1,\ldots,\ell_r} = \bigcap_{i=1}^{r} W_{\ell_i}.$$
If $\bar\rho_{G,n} \sim \bar\rho_{f,\mathfrak{n}}$, then there is some $\gamma \in W$ and some $\alpha \in \mathcal{O}_M$ such that (75) holds.

Proof. Suppose $\ell$ satisfies conditions (a), (b) and (c). As $\ell$ satisfies (a) and (b), it also satisfies hypotheses (i) and (ii) preceding the statement of Lemma 14.3. Write $k = 2nb + m$ where $b$ is a nonnegative integer, and let $z \equiv x/q^{2nb} \pmod \ell$. It follows from the proof of Lemma 14.3 that $\ell \nmid y$ and that $z \in Y_{\ell,m}$. We know from Lemmata 14.4 and 14.5 that there is some $\gamma \in S$ such that $x + q^k\varphi = \gamma\alpha^n$, where $\alpha \in \mathcal{O}_M$ satisfies $\operatorname{Norm}(\alpha) = 2^\mu y$ for some $\mu \in \mathbb{Z}$. Note that $\gamma$ is supported only on the prime ideals above 2. Since $\mathfrak{L} \mid \ell$, we have $\operatorname{ord}_{\mathfrak{L}}(\alpha) = \operatorname{ord}_{\mathfrak{L}}(\gamma) = 0$. Hence
$$z + q^m\varphi \equiv \frac{1}{q^{2nb}}(x + q^k\varphi) \equiv \gamma \cdot \left(\frac{\alpha}{q^{2b}}\right)^n \pmod{\mathfrak{L}}.$$
Since $(\mathcal{O}_M/\mathfrak{L})^* \cong \mathbb{F}_\ell^*$ is cyclic of order $\ell - 1 = tn$, we have $(z + q^m\varphi)^t \equiv \gamma^t \pmod{\mathfrak{L}}$. Thus $(z, \gamma) \in U_{\ell,m}$ and hence $\gamma \in W_{\ell,m}$. The lemma follows. □

Proof of Proposition 14.1. Our proof of Proposition 14.1 is the result of applying Magma scripts based on Lemmata 14.2, 14.3 and 14.6, as well as solving a few Thue–Mahler equations. Our approach subdivides the proof into 60 cases corresponding to 60 quadruples $(q, \delta, \kappa, v)$: here $(q, \delta, \kappa)$ is one of the 30 triples in (74), and $v \in \{0, 1\}$. Let $(x, y, k)$ be a solution to (72) with prime exponent $n \ge 7$. Suppose that $y$ is even if $\kappa = 1$ and $y$ is odd if $\kappa = 5$. Suppose, in addition, that $k \equiv v \pmod 2$. Our first step is to compute the newforms $f$ of weight 2 and level $N = 2^\kappa q$. We know that for one of these newforms $f$, we have $\bar\rho_{G,n} \sim \bar\rho_{f,\mathfrak{n}}$, where $G = G_{x,k}$ is the Frey–Hellegouarch curve given in Proposition 14.1, and $\mathfrak{n} \mid n$ is a prime ideal of $\mathcal{O}_f$, the ring of integers of the Hecke eigenfield $K_f$. Let $p_1, \ldots, p_s$ be the primes $\le 200$ distinct from 2 and $q$, and let
$$E_f = \sum_{i=1}^{s} E_{p_i},$$
where $E_{p_i}$ is as in Lemma 14.2. It follows from Lemma 14.2 that if $\bar\rho_{G,n} \sim \bar\rho_{f,\mathfrak{n}}$ then $\mathfrak{n} \mid E_f$, and so $n \mid \operatorname{Norm}(E_f)$. We illustrate this by taking $(q, \delta, \kappa, v) = (31, 1, 5, 0)$. There are 8 newforms $f_1, \ldots, f_8$ of weight 2 and level $2^\kappa q = 992$, which all happen to be irrational. We find that
$$\operatorname{Norm}(E_{f_j}) = 7,\ 7,\ 2^{10},\ 2^{10},\ 2^3,\ 2^3,\ 2^6 \times 3^2,\ 2^6 \times 3^2,$$
respectively, for $j = 1, 2, \ldots, 8$. Thus $n = 7$ and $f = f_1$ or $f_2$.
We consider first
$$f = f_1 = q + \sqrt{2}q^3 - q^5 - (1 + \sqrt{2})q^7 - q^9 + 2(1 - \sqrt{2})q^{11} + \cdots,$$
with Hecke eigenfield $K_f = \mathbb{Q}(\sqrt{2})$ having ring of integers $\mathcal{O}_f = \mathbb{Z}[\sqrt{2}]$. We found that $E_f = (1 + 2\sqrt{2})$, which is one of the two prime ideals above 7. Hence $\mathfrak{n} = (1 + 2\sqrt{2})$. Next we compute $Z = Z_{\ell_1,\ldots,\ell_{30}}$ as in Lemma 14.3, where $\ell_1, \ldots, \ell_{30} \ne 31$ are the 30 primes satisfying (i) and (ii) with $t \le 200$. We find that $Z = \{0, 8\}$. Thus, by Lemma 14.3, we have $k \equiv 0$ or $8 \pmod{14}$. Now for $m = 0$ and $m = 8$, we compute $W = W_{\ell_1,\ldots,\ell_{36}}$ as in Lemma 14.6, where $\ell_1, \ldots, \ell_{36} \ne 31$ are the 36 primes satisfying (a), (b) and (c) with $t \le 800$. We found that $W = \emptyset$ for $m = 0$ and that $W = \{\epsilon^3\}$ for $m = 8$, where $\epsilon = 1520 + 273\sqrt{31}$ is the fundamental unit of $M = \mathbb{Q}(\sqrt{31})$. Hence we conclude, by Lemma 14.6, that $k \equiv 8 \pmod{14}$ and that
$$x + 31^k\sqrt{31} = (1520 + 273\sqrt{31})^3(X + Y\sqrt{31})^7,$$
for some integers $X$, $Y$. Equating the coefficients of $\sqrt{31}$ on both sides results in a degree-7 Thue–Mahler equation with huge coefficients. However, using an algorithm of Stoll and Cremona [2003] for reducing binary forms, we discover that this Thue–Mahler equation can be rewritten as
$$31^k = -56U^7 + 112U^6V - 84U^5V^2 + 140U^4V^3 + 490U^3V^4 + 1596U^2V^5 + 2807UV^6 + 2119V^7,$$
where $U, V \in \mathbb{Z}$ are related to $X$, $Y$ via the unimodular substitution $U = 2X + 11Y$ and $V = 7X + 39Y$. We applied the Thue–Mahler solver to this and found that it has no solutions. Next we take $f = f_2$, which also has Hecke eigenfield $K_f = \mathbb{Q}(\sqrt{2})$. We apply Lemmata 14.2, 14.3 and 14.6 using the same sets of primes $p_j$ and $\ell_i$ as for $f_1$. We find $E_f = (1 - 2\sqrt{2})$, and so $\mathfrak{n} = (1 - 2\sqrt{2})$ and $n = 7$. Again we obtain $Z = \{0, 8\}$ on applying Lemma 14.3. We find that $W = \emptyset$ for $m = 0$ and $W = \{\bar\epsilon^3\}$ for $m = 8$. Again the corresponding Thue–Mahler equation has no solutions. Thus (72) has no solutions with $n \ge 7$ prime for $q = 31$, $\delta = 1$ and with $y$ odd (i.e., $\kappa = 5$) and $k \equiv 0 \pmod 2$.

We used the above approach to deal with all the cases where $E_f$ is nonzero. In all the cases where $E_f = 0$, the newform $f$ is rational, and in fact corresponds to an elliptic curve $E/\mathbb{Q}$ with nontrivial 2-torsion. These elliptic curves are listed in Table 3. Thus $\bar\rho_{G,n} \sim \bar\rho_{E,n}$. What is required for Proposition 14.1 is to show in these cases that there are no solutions with prime $7 \le n < 1000$ apart from the ones listed in the statement of the proposition. We illustrate how this works by taking $(q, \delta, \kappa, v) = (7, 0, 1, 0)$. There is a unique newform $f$ of weight 2 and level $N = 2^\kappa q = 14$, which corresponds to the elliptic curve
$$Y^2 + XY + Y = X^3 + 4X - 6$$
with Cremona label 14a1. For each prime $7 \le n < 1000$ we computed $Z = Z_{\ell_1,\ldots,\ell_r}$ with $\ell_1, \ldots, \ell_r$ being the primes $\ne 7$ satisfying conditions (i), (ii) with $t \le 200$. The results of this computation are summarized in Table 4.

n               Z
7               {0, 8, 12}
11              {8}
13              {4}
41              {44}
other values    ∅

Table 4. For the quadruple $(q, \delta, \kappa, v) = (7, 0, 1, 0)$ and for prime $7 \le n < 1000$ we computed $Z = Z_{\ell_1,\ldots,\ell_r}$ as given by Lemma 14.3. Here we chose $\ell_1, \ldots, \ell_r$ to be the primes $\ne q$ satisfying (i) and (ii) with $t \le 200$.

Note that by Lemma 14.3, $(k \bmod 2n) \in Z$. We deduce that there are no solutions for prime $n$ satisfying $17 \le n < 1000$, $n \ne 41$. For $n = 7$, $11$, $13$ and $41$, and for each $m$ in the corresponding $Z$, we compute $W = W_{\ell_1,\ldots,\ell_r}$ as in Lemma 14.6, where $\ell_1, \ldots, \ell_r$ are now the primes $\ne q$ satisfying (a), (b) and (c) with $t \le 800$. We found that $W = \emptyset$ in all cases except for $n = 7$, $m = 0$, when $W = \{11 - \sqrt{-7}\}$.
It follows from Lemma 14.6 that $x + 7^k\sqrt{-7} = (11 - \sqrt{-7}) \cdot \alpha^7$ with $\alpha \in \mathbb{Z}[\theta]$, where $\theta = (1 + \sqrt{-7})/2$. Write $\alpha = X + Y\theta$ with $X, Y \in \mathbb{Z}$. Thus
$$\frac{x - 7^k}{2} + 7^k \cdot \theta = (6 - \theta) \cdot (X + Y\theta)^7.$$
Equating the coefficients of $\theta$ on either side yields the Thue–Mahler equation
$$-X^7 + 35X^6Y + 147X^5Y^2 - 105X^4Y^3 - 595X^3Y^4 - 231X^2Y^5 + 161XY^6 + 45Y^7 = 7^k.$$
We find that the only solution is $(X, Y, k) = (-1, 0, 0)$. Hence $x = -11$, and the corresponding solution to (72) is $11^2 + 7 = 2^7$. We observe that $-11 \equiv 1 \pmod 4$, which is consistent with our assumption $x \equiv 1 \pmod 4$ if $\kappa = 1$, made in the statement of Proposition 14.1. The other cases are similar. □

Proofs of Theorems 2 and 3. We now deduce Theorems 2 and 3 from Proposition 14.1. These two theorems concern the equation $x^2 - q^{2k+1} = y^n$ with $n \ge 3$ and $q \nmid x$. Thus we are in the $\delta = 1$ case of the proposition. By the remarks following the statement of the proposition, we are reduced to the case where $n \ge 7$ is prime. Theorem 2 is concerned with the primes $q$ appearing in (4), whilst Theorem 3 deals with $q = 41$, $73$, $89$ and $97$. A glance at Table 3 reveals that all the elliptic curves $E$ appearing in Proposition 14.1 for the case $\delta = 1$ in fact correspond to the values $q = 41$, $73$, $89$ and $97$. Theorems 2 and 3 now follow immediately from the proposition.

Remark. It is well known that the exponent $n$ can be explicitly bounded in (72) in terms of the prime $q$. For example, if $\delta = 1$ and $\kappa = 5$ (i.e., $y$ is odd) then Bugeaud [1997] showed that
$$n \le 4.5 \times 10^6\, q^2 \log^2 q. \qquad (76)$$
Let $(q, \delta, \kappa, v) = (73, 1, 5, 1)$ and let $E$ be the elliptic curve with Cremona label 2336a1; this is one of the two outstanding cases from Table 3 for which the bound (76) is applicable. We are in fact able to substantially improve this bound for the case in consideration through a specialization and minor refinement (we omit the details) of Bugeaud's approach, and deduce that $n < 6 \times 10^6$. Theorem 3 only resolves $x^2 - 73^{2k+1} = y^n$ for $3 \le n \le 1000$. It is natural to ask whether we can apply the same technique, namely Lemma 14.3, to show that there are no solutions for prime exponents $n$ in the range $1000 < n < 6 \times 10^6$. Write $n_u$ for the smallest prime $> 2^u$. For $10 \le u \le 22$ the prime $n = n_u$ belongs to the range $1000 < n < 6 \times 10^6$. For each of these 13 primes we computed primes $\ell_1, \ldots, \ell_r$ satisfying conditions (i) and (ii) such that $Z_{\ell_1,\ldots,\ell_r} = \emptyset$, whence by Lemma 14.3 there are no solutions for that particular exponent $n$.
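As a quick independent check on the exponents $n_u$ appearing in Table 5 below, the following Python fragment (ours) recomputes the smallest prime exceeding $2^u$ for $10 \le u \le 22$, using a deterministic Miller–Rabin test.

```python
# The primes n_u of Table 5: smallest prime > 2^u for 10 <= u <= 22.
def is_prime(m):
    if m < 2:
        return False
    small = (2, 3, 5, 7, 11, 13, 17, 19, 23, 29, 31, 37)
    for p in small:
        if m % p == 0:
            return m == p
    d, s = m - 1, 0
    while d % 2 == 0:
        d, s = d // 2, s + 1
    for a in small:                  # deterministic for all m < 3.3e24
        x = pow(a, d, m)
        if x in (1, m - 1):
            continue
        for _ in range(s - 1):
            x = x * x % m
            if x == m - 1:
                break
        else:
            return False
    return True

for u in range(10, 23):
    n = 2 ** u + 1
    while not is_prime(n):
        n += 2
    print(u, n, n - 2 ** u)   # offsets 7, 5, 3, 17, 27, 3, 1, 29, 3, 21, 7, 17, 15
```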
Table 5 records the values of $\ell_1, \ldots, \ell_r$, as well as the time taken to perform the corresponding computation in Magma on a single processor.

n = n_u                {ℓ_1, …, ℓ_r}                                      time (seconds)
2^10 + 7  = 1031       {2063, 12373, 30931}                               0.18
2^11 + 5  = 2053       {94439, 110863, 143711, 168347, 197089}            7.75
2^12 + 3  = 4099       {73783, 98377, 114773}                             4.39
2^13 + 17 = 8209       {246271, 525377, 574631}                           15.50
2^14 + 27 = 16411      {98467, 459509, 590797}                            6.19
2^15 + 3  = 32771      {65543, 983131, 1179757}                           3.91
2^16 + 1  = 65537      {917519, 1310741, 1703963, 2359333}                57.51
2^17 + 29 = 131101     {2097617, 9439273, 11799091, 12585697}             142.59
2^18 + 3  = 262147     {1048589, 4194353, 6291529}                        65.89
2^19 + 21 = 524309     {6291709, 10486181, 23069597}                      402.12
2^20 + 7  = 1048583    {20971661, 25165993, 44040487}                     1319.57
2^21 + 17 = 2097169    {37749043, 176162197, 188745211}                   2468.46
2^22 + 15 = 4194319    {75497743, 92275019, 100663657}                    4983.07

Table 5. Write $n_u$ for the smallest prime $> 2^u$. For $10 \le u \le 22$ the prime $n = n_u$ belongs to the range $1000 < n < 6 \times 10^6$. The table lists the primes $n = n_u$ in this range and, for each, a set of primes $\ell_1, \ldots, \ell_r$ satisfying conditions (i), (ii) such that $Z_{\ell_1,\ldots,\ell_r} = \emptyset$. It also records the time the computation took for each of these values of $n$, on a single processor.

There are 412681 primes in the range $1000 < n < 6 \times 10^6$. On the basis of the timing in the table, we crudely estimate that it would take around 60 years to carry out the computation (on a single processor) for all 412681 primes. We shall shortly give a substantially faster method for treating the case $\delta = 0$. Alas, this method is not available for $\delta = 1$, as we explain in due course.

15. The proof of Theorem 1: large exponents

We now complete the proof of Theorem 1, which is concerned, for prime $3 \le q < 100$, with the equation $x^2 + q^\alpha = y^n$, subject to the assumptions that $q \nmid x$ and $n \ge 3$. The exponents $n = 3$ and $n = 4$ were treated in Section 2, so we may suppose that $n \ge 5$ is prime. The case $\alpha = 2k$ was handled in Section 5, so we suppose further that $\alpha = 2k + 1$. The case with $y$ odd was the topic of Section 11, so we may assume that $y$ is even. Finally, the case with exponent $n = 5$ was resolved in Section 12, whence we may suppose that $n \ge 7$ is prime. To summarize, we are reduced to treating the equation
$$x^2 + q^{2k+1} = y^n, \quad k \ge 0, \quad q \nmid x, \quad y \text{ even}, \quad n \ge 7 \text{ prime}. \qquad (77)$$
By Proposition 14.1, we may in fact suppose that $n > 1000$ and that
$$q \in \{7, 23, 31, 47, 71, 79\}. \qquad (78)$$
For convenience, we restate Proposition 14.1 specialized to our current situation.

Lemma 15.1. Let $q$ be one of the values in (78). Let $(x, y, k)$ satisfy (77), where $n > 1000$ is prime. Suppose, without loss of generality, that $x \equiv 1 \pmod 4$. Attach to this solution the Frey–Hellegouarch elliptic curve
$$G = G_{x,k} : Y^2 = X^3 + 4xX^2 + 4(x^2 + q^{2k+1})X.$$
Then $\bar\rho_{G,n} \sim \bar\rho_{E,n}$, where $E$ is an elliptic curve of conductor $2q$ and nontrivial 2-torsion given in Table 6.

q    Cremona label for E    minimal model for E
7    14a1                   Y² + XY + Y = X³ + 4X − 6
23   46a1                   Y² + XY = X³ − X² − 10X − 12
31   62a1                   Y² + XY + Y = X³ − X² − X + 1
47   94a1                   Y² + XY + Y = X³ − X² − 1
71   142c1                  Y² + XY = X³ − X² − X − 3
79   158e1                  Y² + XY + Y = X³ + X² + X + 1

Table 6. Elliptic curves $E$ of conductor $2q$ and nontrivial 2-torsion.

Upper bounds for n: linear forms in logarithms, complex and q-adic. We will appeal to bounds for linear forms in logarithms to deduce an upper bound for the prime exponent $n$ in (77), where $q$ belongs to (78).

Proposition 15.2. Let $q$ belong to the list (78). Let $(x, y, k)$ satisfy (77) with prime exponent $n > 1000$. Then $n < U_q$, where
$$U_q = \begin{cases} 2.8 \times 10^8 & \text{if } q = 7,\\ 1.1 \times 10^9 & \text{if } q = 23,\\ 5.0 \times 10^8 & \text{if } q = 31,\\ 2.2 \times 10^9 & \text{if } q = 47,\\ 2.3 \times 10^9 & \text{if } q = 71,\\ 2.2 \times 10^9 & \text{if } q = 79.\end{cases} \qquad (79)$$

To obtain this result, our first order of business will be to produce a lower bound upon $y$.

Lemma 15.3. If there exists a solution to (77), then $y > 4n - 4\sqrt{2n} + 2$.

Proof. We suppose without loss of generality that $x \equiv 1 \pmod 4$, so that we can apply Lemma 15.1. We first show that $y$ is divisible by an odd prime. Suppose otherwise and write $y = 2^\mu$ with $\mu \ge 1$. Then the Frey–Hellegouarch curve $G_{x,k}$ has conductor $2q$ and minimal discriminant $-2^{2n\mu - 12}q^{2k+1}$. A short search of Cremona's tables reveals that there are no such elliptic curves for the values $q$ in (78) (recall that $n > 1000$). Thus, there necessarily exists an odd prime $p \mid y$; since $q \nmid y$, we observe that $q \ne p$. By Lemma 7.1, $a_p(E) \equiv \pm(p + 1) \pmod n$, where $E$ is given by Table 6. As $E$ has nontrivial 2-torsion, we conclude that $2n \mid (p + 1 \mp a_p(E))$.
However, from the Hasse–Weil bounds,
$$0 < p + 1 \mp a_p(E) < (\sqrt{p} + 1)^2 \le (\sqrt{y/2} + 1)^2,$$
and therefore $2n < (\sqrt{y/2} + 1)^2$. The desired inequality follows. □

Now let $q$ be any of the values in (78), write $M = \mathbb{Q}(\sqrt{-q})$, and let $\mathcal{O}_M$ be its ring of integers. Note that the units of $\mathcal{O}_M$ are $\pm 1$. Fix $\mathfrak{P}$ to be one of the two prime ideals of $\mathcal{O}_M$ above 2. After possibly replacing $x$ by $-x$ we have
$$\frac{x + q^k\sqrt{-q}}{2} \cdot \mathcal{O}_M = \mathfrak{P}^{n-2} \cdot \mathfrak{A}^n, \qquad (80)$$
where $\mathfrak{A}$ is an ideal of $\mathcal{O}_M$ with norm $y/2$. Hence
$$\frac{x - q^k\sqrt{-q}}{x + q^k\sqrt{-q}} = \left(\frac{\bar{\mathfrak{P}}}{\mathfrak{P}}\right)^2 \cdot \left(\frac{\bar{\mathfrak{P}} \cdot \bar{\mathfrak{A}}}{\mathfrak{P} \cdot \mathfrak{A}}\right)^n.$$
For all six values of $q$ under consideration, the class group is cyclic and generated by the class $[\mathfrak{P}]$. Let $h_q$ be the class number of $M$; this value is respectively 1, 3, 3, 5, 7 and 5 for $q$ in (78) (see Table 7). As $n > 1000$ is prime, $\gcd(n, h_q) = 1$. Since $\mathcal{O}_M$ has class number $h_q$, it follows that $\mathfrak{P}^{h_q}$ is principal, say $\mathfrak{P}^{h_q} = (\alpha_q) \cdot \mathcal{O}_M$. We fix our choice of $\mathfrak{P}$ so that $\alpha_q$ is given by Table 7.

q      7           23           31           47           71            79
h_q    1           3            3            5            7             5
α_q    (1+√−7)/2   (3+√−23)/2   (1+√−31)/2   (9+√−47)/2   (21+√−71)/2   (7+√−79)/2

Table 7. Here, $h_q$ denotes the class number of $M = \mathbb{Q}(\sqrt{-q})$, and $\alpha_q$ is a generator for the principal ideal $\mathfrak{P}^{h_q}$, where $\mathfrak{P}$ is one of the two prime ideals of $\mathcal{O}_M$ above 2.

Write $\beta_q = \bar\alpha_q/\alpha_q$. Thus
$$\left(\frac{x - q^k\sqrt{-q}}{x + q^k\sqrt{-q}}\right)^{h_q} = \beta_q^2\gamma^n, \qquad (81)$$
where $\gamma \in M$ is some generator for the principal ideal $\big((\bar{\mathfrak{P}} \cdot \bar{\mathfrak{A}})/(\mathfrak{P} \cdot \mathfrak{A})\big)^{h_q}$. To derive an upper bound on $n$, we begin by using (81) to find a "small" linear form in logarithms. Write
$$\Lambda = \log\left(\frac{x - q^k\sqrt{-q}}{x + q^k\sqrt{-q}}\right).$$

Lemma 15.4. If there exists a solution to (77) with $y^n > 100\, q^{2k+1}$, then
$$\log|\Lambda| < 0.75 + \left(k + \frac{1}{2}\right)\log q - \frac{n}{2}\log y.$$

Proof. The assumption that $y^n > 100q^{2k+1}$, together with, say, Lemma B.2 of [Smart 1998], implies that
$$|\Lambda| \le -10\log\left(\frac{9}{10}\right)\left|\frac{x - q^k\sqrt{-q}}{x + q^k\sqrt{-q}} - 1\right| = -20\log\left(\frac{9}{10}\right)\frac{q^k\sqrt{q}}{y^{n/2}},$$
whence the lemma follows. □

To show that $\log|\Lambda|$ here is indeed small, we first require an upper bound upon $k$. From (81), we have
$$\left(\frac{x - q^k\sqrt{-q}}{x + q^k\sqrt{-q}}\right)^{h_q} - 1 = \beta_q^2\gamma^n - 1$$
and so
$$\frac{-2q^k\sqrt{-q}}{x + q^k\sqrt{-q}}\sum_{i=0}^{h_q - 1}\left(\frac{x - q^k\sqrt{-q}}{x + q^k\sqrt{-q}}\right)^i = \beta_q^2\gamma^n - 1. \qquad (82)$$
Since $\gcd(x, q) = 1$, it follows from (82) that, if we set $\Lambda_1 = \gamma^n - \beta_q^{-2}$, then $\nu_q(\Lambda_1) \ge k$. To complement this with an upper bound from linear forms in $q$-adic logarithms, we appeal to Theorem 10, with $q \in \{7, 23, 31, 47, 79\}$, $f = 1$, $D = 2$, $\alpha_1 = \gamma$, $\alpha_2 = \beta_q$, $b_1 = n$, $b_2 = 2$,
$$\log A_1 = \frac{h_q}{2}\log y, \quad \log A_2 = \frac{1}{2}\log q \quad\text{and}\quad b' = \frac{n}{\log q} + \frac{2}{h_q\log y}.$$
Here, we use Lemma 13.2 of Bugeaud, Mignotte and Siksek [Bugeaud et al. 2006], which implies that $h(\alpha_1) = \frac{h_q}{2}\log y$ and $h(\alpha_2) = \frac{h_q}{2}\log 2$. In the case $q = 71$, we make identical choices except to take $\log A_2 = \frac{7}{2}\log 2$, whence
$$b' = \frac{n}{7\log 2} + \frac{2}{7\log y}.$$
Theorem 10 thus yields the inequality
$$\nu_q(\Lambda_1) \le \frac{96\,q\,h_q}{\log^3 q}\cdot\big(\max\{\log b' + \log\log q + 0.4,\ 5\log q\}\big)^2\log y,$$
for $q \in \{7, 23, 31, 47, 79\}$, and
$$\nu_{71}(\Lambda_1) \le 701.2 \cdot \big(\max\{\log b' + \log\log 71 + 0.4,\ 5\log 71\}\big)^2\log y,$$
if $q = 71$. Let us now suppose that
$$n > 10^8, \qquad (83)$$
which will certainly be the case if $n \ge U_q$, for $U_q$ as defined in (79). Then, from Lemma 15.3, in all cases we have that $b' < 1.001\,n/\log q$ and hence obtain the inequalities
$$k < \frac{96\,q\,h_q}{\log^3 q}\cdot\big(\max\{\log n + 0.4001,\ 5\log q\}\big)^2\log y, \quad\text{if } q \in \{7, 23, 31, 47, 79\}, \qquad (84)$$
and
$$k < 701.2 \cdot \big(\max\{\log n + 0.4001,\ 5\log 71\}\big)^2\log y, \quad\text{if } q = 71. \qquad (85)$$
Now consider
$$\Lambda_2 = h_q\log\left(\frac{x - q^k\sqrt{-q}}{x + q^k\sqrt{-q}}\right) = n\log(\epsilon_1\gamma) + 2\log(\epsilon_2\beta_q) + j\pi i, \qquad (86)$$
where we take the principal branches of the logarithms, and the integers $\epsilon_i \in \{-1, 1\}$ and $j$ are chosen so that $\operatorname{Im}(\log(\epsilon_1\gamma))$ and $\operatorname{Im}(\log(\epsilon_2\beta_q))$ have opposite signs, and we have both $|\log(\epsilon_2\beta_q)| < \pi/2$ and $|\Lambda_2|$ minimal.
Explicitly,

q                 7             23            31             47             71               79
ε₂                −1            −1            −1             1              1                −1
|log(ε₂β_q)|      arccos(3/4)   arccos(7/16)  arccos(15/16)  arccos(17/64)  arccos(185/256)  arccos(15/64)

Assume first that $y^n \le 100q^{2k+1}$. If $q \in \{7, 23, 31, 47, 79\}$, it follows from (84) that
$$n < \frac{2\log 10}{\log y} + \frac{\log q}{\log y} + \frac{192\,q\,h_q}{\log^2 q}\cdot\big(\max\{\log n + 0.4001,\ 5\log q\}\big)^2,$$
in each case contradicting Lemma 15.3 and (83). We obtain a similar contradiction in the case $q = 71$ upon considering (85). It follows, then, that we may assume $y^n > 100q^{2k+1}$ and hence conclude, from Lemma 15.4, that
$$\log|\Lambda_2| < \log h_q + 0.75 + \left(k + \frac{1}{2}\right)\log q - \frac{n}{2}\log y.$$
If $q \in \{7, 23, 31, 47, 79\}$, (84) thus implies that
$$\log|\Lambda_2| < \log h_q + 0.75 + \frac{1}{2}\log q + \frac{96\,q\,h_q}{\log^2 q}\cdot\big(\max\{\log n + 0.4001,\ 5\log q\}\big)^2\log y - \frac{n}{2}\log y.$$
An analogous inequality holds for $q = 71$, upon appealing to (85). From Lemma 15.3 and (83), we find that
$$\log|\Lambda_2| < -\kappa_q\, n\log y, \qquad (87)$$
where
$$\kappa_q = \begin{cases} 0.499 & \text{if } q = 7,\\ 0.497 & \text{if } q \in \{23, 31\},\\ 0.494 & \text{if } q = 47,\\ 0.486 & \text{if } q = 71,\\ 0.490 & \text{if } q = 79.\end{cases} \qquad (88)$$
It therefore follows from the definition of $\Lambda_2$ that
$$|j|\pi < \pi n + 2\arccos\tfrac{15}{64} + y^{-0.486n} < \pi n + \pi, \quad\text{and so}\quad |j| \le n. \qquad (89)$$

Linear forms in three logarithms. To deduce an initial lower bound upon the linear form in logarithms $|\Lambda_2|$, we will use the following.

Theorem 14 [Matveev 2000, Theorem 2.1]. Let $K$ be an algebraic number field of degree $D$ over $\mathbb{Q}$ and put $\chi = 1$ if $K$ is real, $\chi = 2$ otherwise. Suppose that $\alpha_1, \alpha_2, \ldots, \alpha_{n_0} \in K^*$ with absolute logarithmic heights $h(\alpha_i)$ for $1 \le i \le n_0$, and suppose that
$$A_i \ge \max\{D\,h(\alpha_i),\ |\log\alpha_i|\}, \quad 1 \le i \le n_0,$$
for some fixed choice of the logarithm. Define
$$\Lambda = b_1\log\alpha_1 + \cdots + b_{n_0}\log\alpha_{n_0},$$
where the $b_i$ are integers, and set
$$B = \max\Big\{1,\ \max\Big\{\frac{|b_i|A_i}{A_{n_0}} : 1 \le i \le n_0\Big\}\Big\}.$$
Define, with $e := \exp(1)$,
$$\Omega = A_1\cdots A_{n_0}, \qquad C(n_0) = C(n_0, \chi) = \frac{16}{n_0!\,\chi}\,e^{n_0}(2n_0 + 1 + 2\chi)(n_0 + 2)(4n_0 + 4)^{n_0+1}\Big(\frac{e n_0}{2}\Big)^\chi,$$
$$C_0 = \log\big(e^{4.4n_0+7}n_0^{5.5}D^2\log(eD)\big) \qquad\text{and}\qquad W_0 = \log(1.5eBD\log(eD)).$$
Then, if $\log\alpha_1, \ldots, \log\alpha_{n_0}$ are linearly independent over $\mathbb{Z}$ and $b_{n_0} \ne 0$, we have
$$\log|\Lambda| > -C(n_0)\,C_0\,W_0\,D^2\,\Omega.$$

We apply Theorem 14 to $\Lambda = \Lambda_2$ with
$$D = 2, \quad \chi = 2, \quad n_0 = 3, \quad b_3 = n, \quad \alpha_3 = \epsilon_1\gamma, \quad b_2 = -2, \quad \alpha_2 = \epsilon_2\beta_q, \quad b_1 = j \quad\text{and}\quad \alpha_1 = -1.$$
We may thus take
$$A_3 = \log y, \quad A_2 = \max\{h_q\log 2,\ |\log(\epsilon_2\beta_q)|\}, \quad A_1 = \pi \quad\text{and}\quad B = n.$$
Since
$$4\,C(3)\,C_0 = 2^{18}\cdot 3\cdot 5\cdot 11\cdot e^5\cdot\log\big(e^{20.2}\cdot 3^{5.5}\cdot 4\log(2e)\big) < 1.80741 \times 10^{11},$$
and $W_0 = \log(3en\log(2e)) < 2.63 + \log n$, we may therefore conclude that
$$\log|\Lambda_2| > -5.68 \times 10^{11}\max\{h_q\log 2,\ |\log(\epsilon_2\beta_q)|\}\,(2.63 + \log n)\log y.$$
It thus follows from (87) that
$$n < \kappa_q^{-1}\,5.68 \times 10^{11}\max\{h_q\log 2,\ |\log(\epsilon_2\beta_q)|\}\,(2.63 + \log n)$$
and hence
$$n < \begin{cases} 2.77 \times 10^{13} & \text{if } q = 7,\\ 8.24 \times 10^{13} & \text{if } q \in \{23, 31\},\\ 1.42 \times 10^{14} & \text{if } q \in \{47, 79\},\\ 2.02 \times 10^{14} & \text{if } q = 71.\end{cases} \qquad (90)$$
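The displayed constant is easy to reproduce numerically; the following Python fragment (ours, using only the standard math library) evaluates $C(3)$, $C_0$ and the bound $4\,C(3)\,C_0 < 1.80741 \times 10^{11}$ quoted above.

```python
# Numerical check of the Matveev constants with n0 = 3, chi = 2, D = 2.
import math

def C(n0, chi):
    return (16 / (math.factorial(n0) * chi) * math.exp(n0)
            * (2 * n0 + 1 + 2 * chi) * (n0 + 2) * (4 * n0 + 4) ** (n0 + 1)
            * (math.e * n0 / 2) ** chi)

n0, chi, D = 3, 2, 2
C0 = math.log(math.exp(4.4 * n0 + 7) * n0 ** 5.5 * D ** 2 * math.log(math.e * D))
print(4 * C(n0, chi) * C0)               # approximately 1.8074e11
print(4 * C(n0, chi) * C0 < 1.80741e11)  # True
```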
To improve these inequalities, we appeal to a sharper, rather complicated, lower bound for linear forms in three complex logarithms, due to Mignotte [2008, Theorem 2]. Our argument is very similar to that employed in a recent paper of the authors [Bennett and Siksek 2023]. We note that recent work of Mignotte and Voutier would substantially improve our bounds (and reduce our subsequent computations considerably).

Theorem 15 (Mignotte). Consider three nonzero algebraic numbers $\alpha_1$, $\alpha_2$ and $\alpha_3$, which are either all real and $> 1$, or all complex of modulus one and all $\ne 1$. In addition, assume that the three numbers $\alpha_1$, $\alpha_2$ and $\alpha_3$ are either all multiplicatively independent, or that two of the numbers are multiplicatively independent and the third is a root of unity. We also consider three positive rational integers $b_1$, $b_2$, $b_3$ with $\gcd(b_1, b_2, b_3) = 1$, and the linear form
$$\Lambda = b_2\log\alpha_2 - b_1\log\alpha_1 - b_3\log\alpha_3,$$
where the logarithms of the $\alpha_i$ are arbitrary determinations of the logarithm, but which are all real or all purely imaginary. We assume that $0 < |\Lambda| < 2\pi/w$, where $w$ is the maximal order of a root of unity in $\mathbb{Q}(\alpha_1, \alpha_2, \alpha_3)$. Suppose further that
$$b_2|\log\alpha_2| = b_1|\log\alpha_1| + b_3|\log\alpha_3| \pm |\Lambda| \qquad (91)$$
and put $d_1 = \gcd(b_1, b_2)$, $d_3 = \gcd(b_3, b_2)$ and $b_2 = d_1b_2' = d_3b_2''$. Let $K$, $L$, $R$, $R_1$, $R_2$, $R_3$, $S$, $S_1$, $S_2$, $S_3$, $T$, $T_1$, $T_2$, $T_3$ be positive rational integers with
$$K \ge 3, \quad L \ge 5, \quad R > R_1 + R_2 + R_3, \quad S > S_1 + S_2 + S_3 \quad\text{and}\quad T > T_1 + T_2 + T_3.$$
Let $\rho \ge 2$ be a real number. Let $a_1$, $a_2$ and $a_3$ be real numbers such that
$$a_i \ge \rho|\log\alpha_i| - \log|\alpha_i| + 2D\,h(\alpha_i), \quad i \in \{1, 2, 3\},$$
where $D = [\mathbb{Q}(\alpha_1, \alpha_2, \alpha_3) : \mathbb{Q}]/[\mathbb{R}(\alpha_1, \alpha_2, \alpha_3) : \mathbb{R}]$, and set
$$U = \left(\frac{KL}{2} + \frac{L}{4} - 1 - \frac{2K}{3L}\right)\log\rho.$$
Assume further that
$$U \ge (D + 1)\log(K^2L) + gL(a_1R + a_2S + a_3T) + D(K - 1)\log b - 2\log(e/2), \qquad (92)$$
where
$$g = \frac{1}{4} - \frac{K^2L}{12RST} \qquad\text{and}\qquad b = (b_2'\eta_0)(b_2''\zeta_0)\left(\prod_{k=1}^{K-1}k!\right)^{-4/(K(K-1))},$$
with
$$\eta_0 = \frac{R - 1}{2} + \frac{(S - 1)b_1}{2b_2} \qquad\text{and}\qquad \zeta_0 = \frac{T - 1}{2} + \frac{(S - 1)b_3}{2b_2}.$$
Put $V = \sqrt{(R_1 + 1)(S_1 + 1)(T_1 + 1)}$. If, for some positive real number $\chi$, we have
(i) $(R_1 + 1)(S_1 + 1)(T_1 + 1) > KM$,
(ii) $\operatorname{Card}\{\alpha_1^r\alpha_2^s\alpha_3^t : 0 \le r \le R_1,\ 0 \le s \le S_1,\ 0 \le t \le T_1\} > L$,
(iii) $(R_2 + 1)(S_2 + 1)(T_2 + 1) > 2K^2$,
(iv) $\operatorname{Card}\{\alpha_1^r\alpha_2^s\alpha_3^t : 0 \le r \le R_2,\ 0 \le s \le S_2,\ 0 \le t \le T_2\} > 2KL$, and
(v) $(R_3 + 1)(S_3 + 1)(T_3 + 1) > 6K^2L$,
where $M = \max\{R_1 + S_1 + 1,\ S_1 + T_1 + 1,\ R_1 + T_1 + 1,\ \chi V\}$, then either
$$|\Lambda| \cdot \frac{LSe^{LS|\Lambda|/(2b_2)}}{2|b_2|} > \rho^{-KL}, \qquad (93)$$
or at least one of the following conditions holds:
(C1) $|b_1| \le R_1$ and $|b_2| \le S_1$ and $|b_3| \le T_1$.
(C2) $|b_1| \le R_2$ and $|b_2| \le S_2$ and $|b_3| \le T_2$.
(C3) Either there exist nonzero rational integers $r_0$ and $s_0$ such that
$$r_0b_2 = s_0b_1 \qquad (94)$$
with
$$|r_0| \le \frac{(R_1 + 1)(T_1 + 1)}{M - T_1} \qquad\text{and}\qquad |s_0| \le \frac{(S_1 + 1)(T_1 + 1)}{M - T_1}, \qquad (95)$$
or there exist rational integers $r_1$, $s_1$, $t_1$ and $t_2$, with $r_1s_1 \ne 0$, such that
$$(t_1b_1 + r_1b_3)s_1 = r_1b_2t_2, \qquad \gcd(r_1, t_1) = \gcd(s_1, t_2) = 1, \qquad (96)$$
which also satisfy
$$|r_1s_1| \le \gcd(r_1, s_1)\cdot\frac{(R_1 + 1)(S_1 + 1)}{M - \max\{R_1, S_1\}}, \qquad |s_1t_1| \le \gcd(r_1, s_1)\cdot\frac{(S_1 + 1)(T_1 + 1)}{M - \max\{S_1, T_1\}}$$
and
$$|r_1t_2| \le \gcd(r_1, s_1)\cdot\frac{(R_1 + 1)(T_1 + 1)}{M - \max\{R_1, T_1\}}.$$
Also, when $t_1 = 0$ we can take $r_1 = 1$, and when $t_2 = 0$ we can take $s_1 = 1$.

We will apply this result to $\Lambda = \Lambda_2$. For simplicity, we will provide full details for the case $q = 7$; the arguments for the other values of $q$ under consideration are similar and follow closely their analogues in [Bennett and Siksek 2023]. If $j = 0$, then $\Lambda_2$ immediately reduces to a linear form in two logarithms and we may appeal to Theorem 11, with (in the notation of that result)
$$c_2 = n, \quad \beta_2 = \epsilon_1\gamma, \quad c_1 = 2, \quad \beta_1 = \frac{1}{\epsilon_2\beta_q}, \quad D = 1,$$
whence we may choose $\log B_2 = \frac{1}{2}\log y$ and $\log B_1 = 1$. We thus have, from (83) and Lemma 15.3,
$$b' = \frac{4}{\log y} + n < 1.001n.$$
From Theorem 11 with $(m, C) = (10, 32.3)$, it follows, again from (83), that
$$\log|\Lambda_2| \ge -64.6(\log n + 0.211)^2\log y.$$
Combining this with inequality (87) contradicts (83). We argue similarly if $j = \pm n$, again reaching a contradiction via bounds for linear forms in two complex logarithms.
We may thus suppose that $j \ne 0$ and $|j| < n$ (so that, in particular, $\gcd(j, n) = 1$), and hence choose
$$b_1 = 2, \quad \alpha_1 = \frac{1}{\epsilon_2\beta_q}, \quad b_2 = n, \quad \alpha_2 = \epsilon_1\gamma, \quad b_3 = -j \quad\text{and}\quad \alpha_3 = -1. \qquad (97)$$
From the fact that $\operatorname{Im}(\log(\epsilon_1\gamma))$ and $\operatorname{Im}(\log(\epsilon_2\beta_q))$ have opposite signs, (91) is satisfied and we have $d_1 = d_3 = 1$ and $b_2' = b_2'' = n$. It follows that
$$h(\alpha_1) = \tfrac{1}{2}\log 2, \quad h(\alpha_2) = \tfrac{1}{2}\log y, \quad h(\alpha_3) = 0,$$
and hence we can take
$$a_1 = \rho\arccos\tfrac{3}{4} + \log 2, \quad a_2 = \rho\pi + \log y \quad\text{and}\quad a_3 = \rho\pi.$$
As noted in [Bugeaud et al. 2006], if we suppose that $m \ge 1$ and define
$$K = [mLa_1a_2a_3], \quad R_1 = [c_1a_2a_3], \quad S_1 = [c_1a_1a_3], \quad T_1 = [c_1a_1a_2],$$
$$R_2 = [c_2a_2a_3], \quad S_2 = [c_2a_1a_3], \quad T_2 = [c_2a_1a_2], \quad R_3 = [c_3a_2a_3], \quad S_3 = [c_3a_1a_3], \quad T_3 = [c_3a_1a_2], \qquad (98)$$
where
$$c_1 = \max\Big\{(\chi mL)^{2/3},\ \Big(\frac{2mL}{a_1}\Big)^{1/2}\Big\}, \quad c_2 = \max\Big\{2^{1/3}(mL)^{2/3},\ \Big(\frac{m}{a_1}\Big)^{1/2}L\Big\} \quad\text{and}\quad c_3 = (6m^2)^{1/3}L, \qquad (99)$$
then conditions (i)–(v) are automatically satisfied. It remains to verify inequality (92). To carry this out, we optimize numerically over values of $\rho$, $L$, $m$ and $\chi$ as in [Bennett and Siksek 2023] (full details are available there, by way of example, in the case $q = 7$). Pari/GP code for carrying this out is due to Voutier [2023]. In each case, we obtain a sharpened upper bound upon the exponent $n$, provided inequality (93) holds. If, on the other hand, inequality (93) fails to be satisfied, from inequality (83) and our choices of $S_1$ and $S_2$, necessarily (C3) holds and we may rewrite $\Lambda_2$ as a linear form in two complex logarithms to which we can apply Theorem 11. In this case, we once again obtain a sharpened upper bound for $n$. Iterating this process leads to the upper bounds $U_q$ given in (79). We observe that direct application of the new bounds from [Mignotte and Voutier 2022], with the corresponding Pari/GP code, substantially sharpens these bounds, though this is not especially important for our purposes. This completes the proof of Proposition 15.2.

Proof of Theorem 1. We now finish the proof of Theorem 1. By the remarks at the beginning of the current section, we are reduced to considering solutions $(x, y, k)$ to (77), where $q$ belongs to (78). Thanks to Propositions 14.1 and 15.2, we may suppose that the prime exponent $n$ belongs to the range $1000 < n < U_q$, where $U_q$ is given by (79).

Lemma 15.5. Let $(x, y, k)$ be a solution to (77), where $q$ belongs to (78) and the exponent $n$ is a prime belonging to the range $1000 < n < U_q$. Let $M = \mathbb{Q}(\sqrt{-q})$. Let $h_q$ and $\alpha_q$ be as in Table 7, and choose $i$ to be the unique integer $0 \le i \le h_q - 1$ satisfying $ni \equiv -2 \pmod{h_q}$. Write $n^* = (-ni - 2)/h_q$. Then, after possibly changing the sign of $x$,
$$\frac{x + q^k\sqrt{-q}}{2} = \alpha_q^{n^*} \cdot \gamma^n, \qquad (100)$$
where $\gamma \in \mathcal{O}_M$. Additionally, $\operatorname{Norm}(\gamma) = 2^{i-1}y$.

Proof. Recall that $h_q$ is the class number of $M$, and that $\mathfrak{P}^{h_q} = \alpha_q\mathcal{O}_M$, where $\mathfrak{P}$ is one of the two prime ideals of $\mathcal{O}_M$ above 2. From (80), after possibly replacing $x$ by $-x$,
$$\frac{x + q^k\sqrt{-q}}{2} \cdot \mathcal{O}_M = \mathfrak{P}^{-2} \cdot \mathfrak{A}^n,$$
where $\mathfrak{A}$ is an ideal of $\mathcal{O}_M$ of norm $y$. Now, for the values of $q$ we are considering, the class group is cyclic and generated by $[\mathfrak{P}]$. Thus there is some $0 \le i \le h_q - 1$ such that $\mathfrak{P}^i\mathfrak{A}$ is principal. However,
$$\frac{x + q^k\sqrt{-q}}{2} \cdot \mathcal{O}_M = \mathfrak{P}^{-ni-2} \cdot (\mathfrak{P}^i \cdot \mathfrak{A})^n.$$
We deduce that $\mathfrak{P}^{-ni-2}$ is principal. As the class $[\mathfrak{P}]$ generates the class group, we infer that $i$ is the unique integer $0 \le i \le h_q - 1$ satisfying $ni \equiv -2 \pmod{h_q}$. Write $n^* = (-ni - 2)/h_q$. As $\mathfrak{P}^{h_q} = (\alpha_q)$, we have $\mathfrak{P}^{-ni-2} = \alpha_q^{n^*} \cdot \mathcal{O}_M$. Hence
$$\frac{x + q^k\sqrt{-q}}{2} = \alpha_q^{n^*} \cdot \gamma^n,$$
where $\gamma \in \mathcal{O}_M$ is a generator for the principal ideal $\mathfrak{P}^i\mathfrak{A}$. We note that $\operatorname{Norm}(\gamma) = 2^{i-1}y$. □
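The recipe for $i$ and $n^*$ is elementary; the following Python fragment (ours) computes them and, as a consistency check, verifies from Table 7 that $\operatorname{Norm}(\alpha_q) = 2^{h_q}$, using $\operatorname{Norm}\big((a + b\sqrt{-q})/2\big) = (a^2 + qb^2)/4$.

```python
# Table 7 data: q -> (h_q, a, b) with alpha_q = (a + b*sqrt(-q))/2.
table7 = {7: (1, 1, 1), 23: (3, 3, 1), 31: (3, 1, 1),
          47: (5, 9, 1), 71: (7, 21, 1), 79: (5, 7, 1)}
for q, (h, a, b) in table7.items():
    assert (a * a + q * b * b) // 4 == 2 ** h   # Norm(alpha_q) = 2^(h_q)

def i_and_nstar(n, h):
    """The unique 0 <= i < h with n*i = -2 (mod h), and n* = (-n*i - 2)/h."""
    i = next(i for i in range(h) if (n * i + 2) % h == 0)
    return i, (-n * i - 2) // h

print(i_and_nstar(1009, 5))   # e.g. q = 47, n = 1009: gives (2, -404)
```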
The following lemma, inspired by ideas of Kraus [1998], provides a computational framework for showing that (77) has no solutions for a particular exponent $n$.

Lemma 15.6. Let $q$ belong to the list (78) and let $\beta_q = \bar\alpha_q/\alpha_q$. Let $n$ be a prime belonging to the range $1000 < n < U_q$. Let $E$ be the elliptic curve given in Table 6. Let $\ell \ne q$ be a prime satisfying the three conditions
(I) $-q$ is a square modulo $\ell$;
(II) $\ell = tn + 1$ for some positive integer $t$;
(III) $a_\ell(E)^2 \not\equiv 4 \pmod n$.
Let $\mathfrak{L}$ be one of the two prime ideals of $\mathcal{O}_M$ above $\ell$, and write $\mathbb{F}_{\mathfrak{L}} = \mathcal{O}_M/\mathfrak{L} \cong \mathbb{F}_\ell$. Let $\beta \in \mathbb{F}_{\mathfrak{L}}$ satisfy $\beta \equiv \bar\alpha_q/\alpha_q \pmod{\mathfrak{L}}$. Choose $g$ to be a cyclic generator for $\mathbb{F}_{\mathfrak{L}}^*$, set $h = g^n$, and define
$$X_{\ell,n} = \{\beta^{n^*} \cdot h^j : j = 0, 1, \ldots, t - 1\} \subset \mathbb{F}_{\mathfrak{L}}.$$
For $w \in X_{\ell,n}$ let
$$E_w : Y^2 = X(X + 1)(X + w).$$
Finally, define
$$Y_{\ell,n} = \{w \in X_{\ell,n} : a_{\mathfrak{L}}(E_w)^2 \equiv a_\ell(E)^2 \pmod n\}.$$
If $Y_{\ell,n} = \emptyset$, then (77) has no solutions.

Proof. Suppose that $(x, y, k)$ is a solution to (77) for our particular pair $(q, n)$. We change the sign of $x$ if necessary so that (100) holds, and let $x' = \pm x$ so that $x' \equiv 1 \pmod 4$. By Lemma 15.1, we know that $\bar\rho_{G_{x',k},n} \sim \bar\rho_{E,n}$. Observe that $G_{x',k}$ is either the same elliptic curve as $G_{x,k}$ if $x' = x$, or it is a quadratic twist by $-1$ if $x' = -x$. Hence $a_\ell(G_{x,k}) = \pm a_\ell(G_{x',k})$ for any odd prime $\ell$ of good reduction for either (and hence both) elliptic curves. We let $\ell$ be a prime satisfying conditions (I), (II) and (III). From (III) and (II), we note that $a_\ell(E) \not\equiv \pm(\ell + 1) \pmod n$. It follows from Lemma 7.1 that $\ell \nmid y$, and that $a_\ell(G_{x',k}) \equiv a_\ell(E) \pmod n$. Thus $a_\ell(G_{x,k})^2 \equiv a_\ell(E)^2 \pmod n$. By Lemma 15.5, identity (100) holds, where $\operatorname{Norm}(\gamma) = 2^{i-1}y$. In particular, $\mathfrak{L}$ is disjoint from the support of $\gamma$ and $\alpha_q$. It follows from (100) that
$$\frac{x - q^k\sqrt{-q}}{x + q^k\sqrt{-q}} = \left(\frac{\bar\alpha_q}{\alpha_q}\right)^{n^*} \cdot \left(\frac{\bar\gamma}{\gamma}\right)^n.$$
As $g$ is a generator of $\mathbb{F}_{\mathfrak{L}}^*$, which is cyclic of order $\ell - 1 = tn$, and as $h = g^n$, there is some $0 \le j \le t - 1$ such that $(\bar\gamma/\gamma)^n \equiv h^j \pmod{\mathfrak{L}}$. Hence
$$\frac{x - q^k\sqrt{-q}}{x + q^k\sqrt{-q}} \equiv w \pmod{\mathfrak{L}}$$
for some $w \in X_{\ell,n}$. The Frey–Hellegouarch curve $G_{x,k}$ defined in Lemma 15.1 can be rewritten as
$$Y^2 = X\big(X + 2(x - q^k\sqrt{-q})\big)\big(X + 2(x + q^k\sqrt{-q})\big)$$
and hence modulo $\mathfrak{L}$ is a quadratic twist of $E_w$. We deduce that
$$a_{\mathfrak{L}}(E_w)^2 = a_\ell(G_{x,k})^2 \equiv a_\ell(E)^2 \pmod n,$$
whence $w \in Y_{\ell,n}$. This completes the proof. □

To finish the proof of Theorem 1, we wrote a Magma script which, for each $q$ in (78) and each prime $n$ in the interval $1000 < n < U_q$, found a prime $\ell$ satisfying conditions (I), (II) and (III), with $Y_{\ell,n} = \emptyset$. The following table gives the approximate time taken for this computation, on a single processor:

q              7     23    31    47    71     79
time (hours)   115   450   226   988   1058   1019

As one may observe from our proofs, for a given $q$, the upper bound $U_q$ upon $n$ in (77), coming from bounds for linear forms in logarithms, depends strongly upon the class number of $\mathbb{Q}(\sqrt{-q})$. It is this dependence which makes extending Theorem 1 to larger values of $q$ an expensive proposition, computationally.
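In the same spirit as the Magma script just described, here is an illustrative (and far slower) Python version of the criterion of Lemma 15.6 (ours; naive point counting restricts it to small $\ell$, and $a_{\mathfrak{L}}$ is computed over $\mathbb{F}_\ell$ via $\mathcal{O}_M/\mathfrak{L} \cong \mathbb{F}_\ell$, with $\sqrt{-q}$ realised as a square root $s$ of $-q$ modulo $\ell$). It is shown for $q = 7$ and a single exponent; the paper's computation of course runs over all primes $1000 < n < U_q$.

```python
# Sketch of Lemma 15.6 (requires Python >= 3.8 for pow(x, -1, m)).
def is_prime(m):
    if m < 2: return False
    d = 2
    while d * d <= m:
        if m % d == 0: return False
        d += 1
    return True

def legendre(a, p):
    a %= p
    if a == 0: return 0
    return 1 if pow(a, (p - 1) // 2, p) == 1 else -1

def ap(a1, a2, a3, a4, a6, p):   # naive trace of Frobenius, p odd
    return -sum(legendre(4 * (x**3 + a2*x*x + a4*x + a6) + (a1*x + a3)**2, p)
                for x in range(p))

def lemma_15_6(q, n, h_q, a, b, E):
    """Find ell = t*n + 1 satisfying (I)-(III) with Y_{ell,n} empty;
    alpha_q = (a + b*sqrt(-q))/2, E given by its a-invariants (Table 6)."""
    i = next(i for i in range(h_q) if (n * i + 2) % h_q == 0)
    nstar = (-n * i - 2) // h_q
    t = 0
    while True:
        t += 1
        ell = t * n + 1
        if ell == q or not is_prime(ell): continue
        if legendre(-q, ell) != 1: continue            # (I)
        aE = ap(*E, ell)
        if (aE * aE - 4) % n == 0: continue            # (III)
        s = next(z for z in range(ell) if z * z % ell == (-q) % ell)
        inv2 = pow(2, -1, ell)
        alpha = (a + b * s) * inv2 % ell
        beta = (a - b * s) * inv2 * pow(alpha, -1, ell) % ell
        bp = pow(beta, nstar % (ell - 1), ell)
        X = {bp * pow(u, n, ell) % ell for u in range(1, ell)}
        # w = 0, 1 give singular E_w and cannot arise from a solution
        # (cf. the proof above), so they are harmlessly skipped.
        Y = [w for w in X if w not in (0, 1)
             and (ap(0, (1 + w) % ell, 0, w, 0, ell) ** 2 - aE * aE) % n == 0]
        if not Y:
            return ell, t                              # criterion succeeds

print(lemma_15_6(7, 1009, 1, 1, 1, (1, 0, 1, 4, -6)))  # q = 7, E = 14a1
```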
16. Concluding remarks

There are quite a few additional Frey–Hellegouarch curves at our disposal that might prove helpful in completing the solution of (5), for some of our problematical values of $q$. A number of these arise from considering (5) as a special case of $x^2 - q^\delta z^\kappa = y^n$, where, say, $\kappa \in \{3, 4, 6\}$ and $0 \le \delta < \kappa$. In each case, the dimensions of the spaces of modular forms under consideration grow quickly, complicating matters. This is particularly true if $\kappa \in \{4, 6\}$, where our Frey–Hellegouarch curve will a priori be defined over $\mathbb{Q}(\sqrt{q})$, and so the relevant modular forms are Hilbert modular forms, which are more challenging to compute than classical modular forms. In the case where $y$ is even in (5) (whence we are in the situation where our bounds coming from linear forms in logarithms are weaker), we can attach a Frey–Hellegouarch $\mathbb{Q}$-curve to a potential solution (which at least corresponds to a classical modular form). To do this, write $M = \mathbb{Q}(\sqrt{q})$ and $\mathcal{O}_M$ for the ring of integers of $M$. Assuming that $M$ has class number one (which is the case for, say, the remaining values $q \in \{41, 89, 97\}$), we have
$$\frac{x + q^k\sqrt{q}}{2} = \delta^r\gamma^{n-2}\alpha^n$$
for some $r \in \mathbb{Z}$ and $\alpha \in \mathcal{O}_M$. Here, $\delta$ is a fundamental unit for $\mathcal{O}_M$ and $\gamma$ is a suitably chosen generator for one of the two prime ideals above 2 in $M$. From this equation,
$$q^k\sqrt{q} = \delta^r\gamma^{n-2}\alpha^n - \bar\delta^r\bar\gamma^{n-2}\bar\alpha^n.$$
Treating this as a ternary equation of signature $(n, n, 2)$, we can attach to such a solution a Frey–Hellegouarch $\mathbb{Q}$-curve; see, for example, [van Langen 2021, Section 6]. We will not pursue this here.

Acknowledgments

The authors are grateful to the anonymous referee for numerous insightful comments, and to Pedro-José Carzola Garcia, Ritesh Goenka and Vandita Patel for finding errors in earlier drafts of this paper.

References

[Arif and Abu Muriefah 2002] S. A. Arif and F. S. Abu Muriefah, "On the Diophantine equation $x^2 + q^{2k+1} = y^n$", J. Number Theory 95:1 (2002), 95–100. MR Zbl
[Barros 2010] C. F. Barros, On the Lebesgue–Nagell equation and related subjects, Ph.D. thesis, University of Warwick, 2010.
[Bauer and Bennett 2002] M. Bauer and M. A. Bennett, "Applications of the hypergeometric method to the generalized Ramanujan–Nagell equation", Ramanujan J. 6:2 (2002), 209–270. MR Zbl
[Bennett and Siksek 2023] M. A. Bennett and S. Siksek, "Differences between perfect powers: the Lebesgue–Nagell equation", Trans. Amer. Math. Soc. 376:1 (2023), 335–370. MR Zbl
[Bennett and Skinner 2004] M. A. Bennett and C. M. Skinner, "Ternary Diophantine equations via Galois representations and modular forms", Canad. J. Math. 56:1 (2004), 23–54. MR Zbl
[Bennett et al. 2010] M. A. Bennett, J. S. Ellenberg, and N. C. Ng, "The Diophantine equation $A^4 + 2^\delta B^2 = C^n$", Int. J. Number Theory 6:2 (2010), 311–338. MR Zbl
[Bennett et al. 2022] M. A. Bennett, A. Gherga, V. Patel, and S. Siksek, "Odd values of the Ramanujan tau function", Math. Ann. 382:1-2 (2022), 203–238. MR Zbl
[Bérczes and Pink 2008] A. Bérczes and I. Pink, "On the Diophantine equation $x^2 + p^{2k} = y^n$", Arch. Math. (Basel) 91:6 (2008), 505–517. MR Zbl
[Bérczes and Pink 2012] A. Bérczes and I. Pink, "On the Diophantine equation $x^2 + d^{2l+1} = y^n$", Glasg. Math. J. 54:2 (2012), 415–428. MR Zbl
[Bilu et al. 2001] Y. Bilu, G. Hanrot, and P. M. Voutier, "Existence of primitive divisors of Lucas and Lehmer numbers", J. Reine Angew. Math. 539 (2001), 75–122. MR Zbl
[Bosma et al. 1997] W. Bosma, J. Cannon, and C. Playoust, "The Magma algebra system, I: The user language", J. Symbolic Comput. 24:3-4 (1997), 235–265. MR Zbl
[Breuil et al. 2001] C. Breuil, B. Conrad, F. Diamond, and R. Taylor, "On the modularity of elliptic curves over Q: wild 3-adic exercises", J. Amer. Math. Soc. 14:4 (2001), 843–939. MR Zbl
[Bugeaud 1997] Y. Bugeaud, "On the Diophantine equation $x^2 - p^m = \pm y^n$", Acta Arith. 80:3 (1997), 213–223. MR Zbl
[Bugeaud and Laurent 1996] Y. Bugeaud and M. Laurent, "Minoration effective de la distance p-adique entre puissances de nombres algébriques", J. Number Theory 61:2 (1996), 311–342. MR Zbl
[Bugeaud et al. 2006] Y. Bugeaud, M. Mignotte, and S. Siksek, "Classical and modular approaches to exponential Diophantine equations, II: The Lebesgue–Nagell equation", Compos. Math. 142:1 (2006), 31–62. MR Zbl
[Carmichael 1913] R. D. Carmichael, "On the numerical factors of the arithmetic forms $\alpha^n \pm \beta^n$", Ann. of Math. (2) 15:1-4 (1913), 49–70. MR Zbl
[Cohn 1997] J. H. E. Cohn, "The Diophantine equation $x^4 - Dy^2 = 1$, II", Acta Arith. 78:4 (1997), 401–403. MR Zbl
[Cohn 2003] J. H. E. Cohn, "The Diophantine equation $x^n = Dy^2 + 1$", Acta Arith. 106:1 (2003), 73–83. MR Zbl
[Cremona 1997] J. E. Cremona, Algorithms for modular elliptic curves, 2nd ed., Cambridge Univ. Press, 1997. MR Zbl
[Gherga and Siksek 2022] A. Gherga and S. Siksek, "Efficient resolution of Thue–Mahler equations", preprint, 2022. arXiv 2207.14492
[Halberstadt and Kraus 2002] E. Halberstadt and A. Kraus, "Courbes de Fermat: résultats et problèmes", J. Reine Angew. Math. 548 (2002), 167–234. MR Zbl
[Ivorra 2003] W. Ivorra, "Sur les équations $x^p + 2^\beta y^p = z^2$ et $x^p + 2^\beta y^p = 2z^2$", Acta Arith. 108:4 (2003), 327–338. MR Zbl
[Ivorra and Kraus 2006] W. Ivorra and A. Kraus, "Quelques résultats sur les équations $ax^p + by^p = cz^2$", Canad. J. Math. 58:1 (2006), 115–153. MR Zbl
[Koutsianas 2020] A. Koutsianas, "An application of the modular method and the symplectic argument to a Lebesgue–Nagell equation", Mathematika 66:1 (2020), 230–244. MR Zbl
[Kraus 1997] A. Kraus, "Majorations effectives pour l'équation de Fermat généralisée", Canad. J. Math. 49:6 (1997), 1139–1161. MR Zbl
[Kraus 1998] A. Kraus, "Sur l'équation $a^3 + b^3 = c^p$", Exp. Math. 7:1 (1998), 1–13. MR Zbl
[Kraus and Oesterlé 1992] A. Kraus and J. Oesterlé, "Sur une question de B. Mazur", Math. Ann. 293:2 (1992), 259–275. MR Zbl
[van Langen 2021] J. M. van Langen, "On the sum of fourth powers in arithmetic progression", Int. J. Number Theory 17:1 (2021), 191–221. MR Zbl
[Laurent 2008] M. Laurent, "Linear forms in two logarithms and interpolation determinants, II", Acta Arith. 133:4 (2008), 325–348. MR Zbl
[Matveev 2000] E. M. Matveev, "An explicit lower bound for a homogeneous rational linear form in logarithms of algebraic numbers, II", Izv. Ross. Akad. Nauk Ser. Mat. 64:6 (2000), 125–180. In Russian; translated in Izv. Math. 64:6 (2000), 1217–1269. MR Zbl
[Mazur 1978] B. Mazur, "Rational isogenies of prime degree", Invent. Math. 44:2 (1978), 129–162. MR Zbl
[Mignotte 2008] M. Mignotte, "A kit on linear forms in three logarithms", preprint, 2008, available at ~bugeaud/travaux/kit.pdf.
[Mignotte and Voutier 2022] M. Mignotte and P. Voutier, "A kit for linear forms in three logarithms", preprint, 2022. arXiv 2205.08899
[Mihăilescu 2004] P. Mihăilescu, "Primary cyclotomic units and a proof of Catalan's conjecture", J. Reine Angew. Math. 572 (2004), 167–195. MR Zbl
[Pethő et al. 1999] A. Pethő, H. G. Zimmer, J. Gebel, and E. Herrmann, "Computing all S-integral points on elliptic curves", Math. Proc. Cambridge Philos. Soc. 127:3 (1999), 383–402. MR Zbl
[Pillai 1936] S. S. Pillai, "On $a^x - b^y = c$", J. Indian Math. Soc. 2 (1936), 119–122. Correction in 2 (1937), 215. Zbl
[Ribet 1990] K. A. Ribet, "On modular representations of Gal(Q̄/Q) arising from modular forms", Invent. Math. 100:2 (1990), 431–476. MR Zbl
[Ribet 1997] K. A. Ribet, "On the equation $a^p + 2^\alpha b^p + c^p = 0$", Acta Arith. 79:1 (1997), 7–16. MR Zbl
[Ribet 1997] K. A. Ribet, “On the equation a^p + 2^α b^p + c^p = 0”, Acta Arith. 79:1 (1997), 7–16. MR Zbl
[Serre 1975] J.-P. Serre, “Divisibilité de certaines fonctions arithmétiques”, exposé 20 in Séminaire Delange–Pisot–Poitou: théorie des nombres, 1974/1975, I, Secrétariat Math., Paris, 1975. MR Zbl
[Siksek 2003] S. Siksek, “On the Diophantine equation x^2 = y^p + 2^k z^p”, J. Théor. Nombres Bordeaux 15:3 (2003), 839–846. MR Zbl
[Siksek 2012] S. Siksek, “The modular approach to Diophantine equations”, pp. 151–179 in Explicit methods in number theory (Paris, 2004), Panor. Synthèses 36, Soc. Math. France, Paris, 2012. MR Zbl
[Smart 1998] N. P. Smart, The algorithmic resolution of Diophantine equations, Lond. Math. Soc. Stud. Texts 41, Cambridge Univ. Press, 1998. MR Zbl
[Stein 2007] W. Stein, Modular forms: a computational approach, Grad. Stud. in Math. 79, Amer. Math. Soc., Providence, RI, 2007. MR Zbl
[Stoll and Cremona 2003] M. Stoll and J. E. Cremona, “On the reduction theory of binary forms”, J. Reine Angew. Math. 565 (2003), 79–99. MR Zbl
[Voutier 2023] P. Voutier, “lfl3-kit”, PARI/GP code, 2023.
[Wiles 1995] A. Wiles, “Modular elliptic curves and Fermat’s last theorem”, Ann. of Math. (2) 141:3 (1995), 443–551. MR Zbl

Communicated by Frank Calegari
Received 2021-10-11. Revised 2022-09-22. Accepted 2022-11-28.
bennett@math.ubc.ca Department of Mathematics, University of British Columbia, Vancouver, Canada
s.siksek@warwick.ac.uk Mathematics Institute, University of Warwick, Coventry, United Kingdom
189343
https://artofproblemsolving.com/articles/files/KedlayaInequalities.pdf?srsltid=AfmBOoqy6MogSegYZHwGwHlmuBuYTpXXer7NWILudoHTxTeUZDMuBQr4
A < B (A is less than B)

Kiran Kedlaya
based on notes for the Math Olympiad Program (MOP)
Version 1.0, last revised August 2, 1999

© Kiran S. Kedlaya. This is an unfinished manuscript distributed for personal use only. In particular, any publication of all or part of this manuscript without prior consent of the author is strictly prohibited. Please send all comments and corrections to the author at kedlaya@math.mit.edu. Thank you!

Introduction

These notes constitute a survey of the theory and practice of inequalities. While their intended audience is high-school students, primarily present and aspiring participants of the Math Olympiad Program (MOP), I hope they prove informative to a wider audience. In particular, those who experience inequalities via the Putnam competition, or via problem columns in such journals as Crux Mathematicorum or the American Mathematical Monthly, should find some benefit.

Having named high-school students as my target audience, I must now turn around and admit that I have not made any effort to keep calculus out of the exposition, for several reasons. First, in certain places, rewriting to avoid calculus would make the exposition a lot more awkward. Second, the calculus I invoke is for the most part pretty basic, mostly properties of the first and second derivative. Finally, it is my experience that many Olympiad participants have studied calculus anyway. In any case, I have clearly flagged uses of calculus in the text, and I've included a crash course in calculus (Chapter ??) to fill in the details.

By no means is this primer a substitute for an honest treatise on inequalities, such as the magnum opus of Hardy, Littlewood and Pólya [?] or its latter-day sequel [?], nor for a comprehensive catalog of problems in the area, for which we have Stanley Rabinowitz' series [?]. My aim, rather than to provide complete information, is to whet the reader's appetite for this beautiful and boundless subject. Also note that I have given geometric inequalities short shrift, except to the extent that they can be written in an algebraic or trigonometric form. ADD REFERENCE.

Thanks to Paul Zeitz for his MOP 1995 notes, upon which these notes are ultimately based. (In particular, they are my source for symmetric sum notation.) Thanks also to the participants of the 1998 and 1999 MOPs for working through preliminary versions of these notes.

Caveat solver! It seems easier to fool oneself by constructing a false proof of an inequality than of any other type of mathematical assertion. All it takes is one reversed inequality to turn an apparently correct proof into a wreck. The adage "if it seems too good to be true, it probably is" applies in full force. To impress the gravity of this point upon the reader, we provide a little exercise in mathematical proofreading. Of the following X proofs, only Y are correct. Can you spot the fakes? PUT IN THE EXAMPLES.

Chapter 1
Separable inequalities

This chapter covers what I call "separable" inequalities, those which can be put in the form

f(x_1) + ⋯ + f(x_n) ≥ c

for suitably constrained x_1, …, x_n. For example, if one fixes the product, or the sum, of the variables, the AM-GM inequality takes this form, and in fact this will be our first example.

1.1 Smoothing, convexity and Jensen's inequality

The "smoothing principle" states that if you have a quantity of the form f(x_1) + ⋯ + f(x_n) which becomes smaller as you move two of the variables closer together (while preserving some constraint, e.g.
the sum of the variables), then the quantity is minimized by making the variables all equal. This remark is best illustrated with an example: the famous arithmetic mean and geometric mean (AM-GM) inequality.

Theorem 1 (AM-GM). Let x_1, …, x_n be positive real numbers. Then

(x_1 + ⋯ + x_n)/n ≥ (x_1 ⋯ x_n)^{1/n},

with equality if and only if x_1 = ⋯ = x_n.

Proof. We will make a series of substitutions that preserve the left-hand side while strictly increasing the right-hand side. At the end, the x_i will all be equal and the left-hand side will equal the right-hand side; the desired inequality will follow at once. (Make sure that you understand this reasoning before proceeding!)

If the x_i are not already all equal to their arithmetic mean, which we call a for convenience, then there must exist two indices, say i and j, such that x_i < a < x_j. (If the x_i were all bigger than a, then so would be their arithmetic mean, which is impossible; similarly if they were all smaller than a.) We will replace the pair x_i and x_j by

x′_i = a, x′_j = x_i + x_j − a;

by design, x′_i and x′_j have the same sum as x_i and x_j, but since they are closer together, their product is larger. To be precise,

a(x_i + x_j − a) = x_i x_j + (x_j − a)(a − x_i) > x_i x_j

because x_j − a and a − x_i are positive numbers.

By this replacement, we increase the number of the x_i which are equal to a, preserving the left-hand side of the desired inequality while increasing the right-hand side. As noted initially, eventually this process ends when all of the x_i are equal to a, and the inequality becomes equality in that case. It follows that in all other cases, the inequality holds strictly.

Note that we made sure that the replacement procedure terminates in a finite number of steps. If we had proceeded more naively, replacing a pair of x_i by their arithmetic mean, we would get an infinite procedure, and then would have to show that the x_i were "converging" in a suitable sense. (They do converge, but making this precise would require some additional effort which our alternate procedure avoids.)

A strong generalization of this smoothing can be formulated for an arbitrary convex function. Recall that a set of points in the plane is said to be convex if the line segment joining any two points in the set lies entirely within the set. A function f defined on an interval (which may be open, closed or infinite on either end) is said to be convex if the set {(x, y) ∈ R²: y ≥ f(x)} is convex. We say f is concave if −f is convex. (This terminology was standard at one time, but today most calculus textbooks use "concave up" and "concave down" for our "convex" and "concave". Others use the evocative sobriquets "holds water" and "spills water".)

A more enlightening way to state the definition might be that f is convex if for any t ∈ [0, 1] and any x, y in the domain of f,

t f(x) + (1 − t) f(y) ≥ f(tx + (1 − t)y).

If f is continuous, it suffices to check this for t = 1/2. Conversely, a convex function is automatically continuous except possibly at the endpoints of the interval on which it is defined. DIAGRAM.

Theorem 2. If f is a convex function, then the following statements hold:
1. If a ≤ b < c ≤ d, then (f(c) − f(a))/(c − a) ≤ (f(d) − f(b))/(d − b). (The slopes of secant lines through the graph of f increase with either endpoint.)
2. If f is differentiable everywhere, then its derivative (that is, the slope of the tangent line to the graph of f) is an increasing function.

The utility of convexity for proving inequalities comes from two factors.
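Before turning to those two factors, it may help to see the smoothing step of the AM-GM proof in action. The following sketch is our own illustration, not part of the original notes (the helper name smooth_once is invented for this demo): it repeatedly applies the replacement (x_i, x_j) → (a, x_i + x_j − a) and prints the arithmetic and geometric means, the first staying fixed while the second climbs.

```python
# Numerical illustration of the smoothing step in the AM-GM proof.
# Illustrative sketch only; `smooth_once` is our own helper name.
import math

def smooth_once(xs):
    """Replace some x_i < a < x_j by (a, x_i + x_j - a); return True if changed."""
    a = sum(xs) / len(xs)
    lo = next((i for i, x in enumerate(xs) if x < a), None)
    hi = next((j for j, x in enumerate(xs) if x > a), None)
    if lo is None or hi is None:      # all entries already equal the mean
        return False
    xs[lo], xs[hi] = a, xs[lo] + xs[hi] - a
    return True

xs = [1.0, 2.0, 4.0, 9.0]
while smooth_once(xs):
    am = sum(xs) / len(xs)
    gm = math.exp(sum(map(math.log, xs)) / len(xs))
    print(f"AM = {am:.6f}, GM = {gm:.6f}")   # AM stays 4, GM climbs to 4
```

Each pass pins at least one more entry to the mean, so the loop terminates in at most n − 1 steps, mirroring the finiteness argument in the proof above.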
The first factor is Jensen's inequality, which one may regard as a formal statement of the "smoothing principle" for convex functions.

Theorem 3 (Jensen). Let f be a convex function on an interval I and let w_1, …, w_n be nonnegative real numbers whose sum is 1. Then for all x_1, …, x_n ∈ I,

w_1 f(x_1) + ⋯ + w_n f(x_n) ≥ f(w_1 x_1 + ⋯ + w_n x_n).

Proof. An easy induction on n, the case n = 2 being the second definition above.

The second factor is the ease with which convexity can be checked using calculus, namely via the second derivative test.

Theorem 4. Let f be a twice-differentiable function on an open interval I. Then f is convex on I if and only if f″(x) ≥ 0 for all x ∈ I.

For example, the AM-GM inequality can be proved by noting that f(x) = log x is concave; its first derivative is 1/x and its second −1/x². In fact, one immediately deduces a weighted AM-GM inequality; as we will generalize AM-GM again later, we will not belabor this point.

We close this section by pointing out that separable inequalities sometimes concern functions which are not necessarily convex. Nonetheless one can prove something!

Example 5 (USA, 1998). Let a_0, …, a_n be numbers in the interval (0, π/2) such that

tan(a_0 − π/4) + tan(a_1 − π/4) + ⋯ + tan(a_n − π/4) ≥ n − 1.

Prove that tan a_0 tan a_1 ⋯ tan a_n ≥ n^{n+1}.

Solution. Let x_i = tan(a_i − π/4) and y_i = tan a_i = (1 + x_i)/(1 − x_i), so that x_i ∈ (−1, 1). The claim would follow immediately from Jensen's inequality if only the function f(x) = log((1 + x)/(1 − x)) were convex on the interval (−1, 1), but alas, it isn't. It's concave on (−1, 0] and convex on [0, 1). So we have to fall back on the smoothing principle.

What happens if we try to replace x_i and x_j by two numbers that have the same sum but are closer together? The contribution of x_i and x_j to the left side of the desired inequality is

((1 + x_i)/(1 − x_i)) · ((1 + x_j)/(1 − x_j)) = 1 + 2/((1 + x_i x_j)/(x_i + x_j) − 1).

The replacement in question will increase x_i x_j, and so will decrease the above quantity provided that x_i + x_j > 0. So all we need to show is that we can carry out the smoothing process so that every pair we smooth satisfies this restriction.

Obviously there is no problem if all of the x_i are positive, so we concentrate on the possibility of having x_i < 0. Fortunately, we can't have more than one negative x_i, since x_0 + ⋯ + x_n ≥ n − 1 and each x_i is less than 1. (So if two were negative, the sum would be at most the sum of the remaining n − 1 terms, which is less than n − 1.) Moreover, if x_0 < 0, we could not have x_0 + x_j < 0 for j = 1, …, n, else we would have the contradiction

x_0 + ⋯ + x_n ≤ (1 − n)x_0 < n − 1.

Thus x_0 + x_j > 0 for some j, and we can replace these two by their arithmetic mean. Now all of the x_i are positive and smoothing (or Jensen) may continue without further restrictions, yielding the desired inequality.

Problems for Section 1.1

1. Make up a problem by taking a standard property of convex functions, and specializing to a particular function. The less evident it is where your problem came from, the better!

2. Given real numbers x_1, …, x_n, what is the minimum value of |x − x_1| + ⋯ + |x − x_n|?

3. (via Titu Andreescu) If f is a convex function and x_1, x_2, x_3 lie in its domain, then

f(x_1) + f(x_2) + f(x_3) + f((x_1 + x_2 + x_3)/3) ≥ (4/3)[f((x_1 + x_2)/2) + f((x_2 + x_3)/2) + f((x_3 + x_1)/2)].

4. (USAMO 1974/1) For a, b, c > 0, prove that a^a b^b c^c ≥ (abc)^{(a+b+c)/3}.

5. (India, 1995) Let x_1, …, x_n be n positive numbers whose sum is 1.
Prove that

x_1/√(1 − x_1) + ⋯ + x_n/√(1 − x_n) ≥ √(n/(n − 1)).

6. Let A, B, C be the angles of a triangle. Prove that
(a) sin A + sin B + sin C ≤ 3√3/2;
(b) cos A + cos B + cos C ≤ 3/2;
(c) sin(A/2) sin(B/2) sin(C/2) ≤ 1/8;
(d) cot A + cot B + cot C ≥ √3 (i.e. the Brocard angle is at most π/6).
(Beware: not all of the requisite functions are convex everywhere!)

7. (Vietnam, 1998) Let x_1, …, x_n (n ≥ 2) be positive numbers satisfying

1/(x_1 + 1998) + 1/(x_2 + 1998) + ⋯ + 1/(x_n + 1998) = 1/1998.

Prove that

(x_1 x_2 ⋯ x_n)^{1/n}/(n − 1) ≥ 1998.

(Again, beware of nonconvexity.)

8. Let x_1, x_2, … be a sequence of positive real numbers. If a_k and g_k are the arithmetic and geometric means, respectively, of x_1, …, x_k, prove that

a_n^n/g_n^n ≥ a_{n−1}^{n−1}/g_{n−1}^{n−1}   (1.1)
n(a_n − g_n) ≥ (n − 1)(a_{n−1} − g_{n−1}).   (1.2)

These strong versions of the AM-GM inequality are due to Rado [?, Theorem 60] and Popoviciu [?], respectively. (These are just a sample of the many ways the AM-GM inequality can be sharpened, as evidenced by [?].)

1.2 Unsmoothing and noninterior extrema

A convex function has no interior local maximum. (If it had an interior local maximum at x, then the secant line through (x − ε, f(x − ε)) and (x + ε, f(x + ε)) would lie under the curve at x, which cannot occur for a convex function.)

Even better, a function which is linear in a given variable, as the others are held fixed, attains no extrema in either direction except at its boundary values.

Problems for Section 1.2

1. (IMO 1974/5) Determine all possible values of

S = a/(a + b + d) + b/(a + b + c) + c/(b + c + d) + d/(a + c + d)

where a, b, c, d are arbitrary positive numbers.

2. (Bulgaria, 1995) Let n ≥ 2 and 0 ≤ x_i ≤ 1 for all i = 1, 2, …, n. Show that

(x_1 + x_2 + ⋯ + x_n) − (x_1x_2 + x_2x_3 + ⋯ + x_nx_1) ≤ ⌊n/2⌋,

and determine when there is equality.

1.3 Discrete smoothing

The notions of smoothing and convexity can also be applied to functions only defined on integers.

Example 6. How should n balls be put into k boxes to minimize the number of pairs of balls which lie in the same box?

Solution. In other words, we want to minimize ∑_{i=1}^k C(n_i, 2) over sequences n_1, …, n_k of nonnegative integers adding up to n. If n_i − n_j ≥ 2 for some i, j, then moving one ball from i to j decreases the number of pairs in the same box by

C(n_i, 2) − C(n_i − 1, 2) + C(n_j, 2) − C(n_j + 1, 2) = n_i − n_j − 1 > 0.

Thus we minimize the number of pairs by making sure no two boxes differ by more than one ball; one can easily check that the boxes must each contain ⌊n/k⌋ or ⌈n/k⌉ balls. (A brute-force check of this answer is sketched after the problems below.)

Problems for Section 1.3

1. (Germany, 1995) Prove that for all integers k and n with 1 ≤ k ≤ 2n,

C(2n+1, k−1) + C(2n+1, k+1) ≥ 2 · ((n + 1)/(n + 2)) · C(2n+1, k).

2. (Arbelos) Let a_1, a_2, … be a convex sequence of real numbers, which is to say a_{k−1} + a_{k+1} ≥ 2a_k for all k ≥ 2. Prove that for all n ≥ 1,

(a_1 + a_3 + ⋯ + a_{2n+1})/(n + 1) ≥ (a_2 + a_4 + ⋯ + a_{2n})/n.

3. (USAMO 1993/5) Let a_0, a_1, a_2, … be a sequence of positive real numbers satisfying a_{i−1} a_{i+1} ≤ a_i² for i = 1, 2, 3, …. (Such a sequence is said to be log concave.) Show that for each n ≥ 1,

((a_0 + ⋯ + a_n)/(n + 1)) · ((a_1 + ⋯ + a_{n−1})/(n − 1)) ≥ ((a_0 + ⋯ + a_{n−1})/n) · ((a_1 + ⋯ + a_n)/n).

4. (MOP 1997) Given a sequence {x_n} (n ≥ 0) with x_n > 0 for all n ≥ 0, such that the sequence {a^n x_n} is convex for all a > 0, show that the sequence {log x_n} is also convex.

5. How should the numbers 1, …, n be arranged in a circle to make the sum of the products of pairs of adjacent numbers as large as possible? As small as possible?
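As a sanity check on Example 6, here is a small brute-force experiment (our own illustration, not part of the original notes) comparing the discrete-smoothing answer, with each box holding ⌊n/k⌋ or ⌈n/k⌉ balls, against an exhaustive search over all distributions:

```python
# Brute-force check of Example 6: put n balls into k boxes so as to
# minimize the number of same-box pairs. Illustrative sketch only.
from itertools import combinations_with_replacement
from math import comb

def pairs(dist):
    """Number of same-box pairs for a given tuple of box sizes."""
    return sum(comb(m, 2) for m in dist)

def brute_min(n, k):
    """Minimum over all multisets of k nonnegative box sizes summing to n."""
    return min(pairs(d)
               for d in combinations_with_replacement(range(n + 1), k)
               if sum(d) == n)

def smoothed(n, k):
    """The answer from discrete smoothing: r boxes of q+1 balls, k-r of q."""
    q, r = divmod(n, k)
    return r * comb(q + 1, 2) + (k - r) * comb(q, 2)

for n, k in [(10, 3), (7, 4), (12, 5)]:
    assert brute_min(n, k) == smoothed(n, k)
    print(f"n={n}, k={k}: minimum pairs = {smoothed(n, k)}")
```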
Chapter 2
Symmetric polynomial inequalities

This section is a basic guide to polynomial inequalities, which is to say, those inequalities which are in (or can be put into) the form P(x_1, …, x_n) ≥ 0 with P a symmetric polynomial in the real (or sometimes nonnegative) variables x_1, …, x_n.

2.1 Introduction to symmetric polynomials

Many inequalities express some symmetric relationship between a collection of numbers. For this reason, it seems worthwhile to brush up on some classical facts about symmetric polynomials.

For arbitrary x_1, …, x_n, the coefficients c_1, …, c_n of the polynomial

(t + x_1) ⋯ (t + x_n) = t^n + c_1 t^{n−1} + ⋯ + c_{n−1} t + c_n

are called the elementary symmetric functions of the x_i. (That is, c_k is the sum of the products of the x_i taken k at a time.) Sometimes it proves more convenient to work with the symmetric averages d_i = c_i/C(n, i). For notational convenience, we put c_0 = d_0 = 1 and c_k = d_k = 0 for k > n.

Two basic inequalities regarding symmetric functions are the following. (Note that the second follows from the first.)

Theorem 7 (Newton). If x_1, …, x_n are nonnegative, then

d_i² ≥ d_{i−1} d_{i+1}, i = 1, …, n − 1.

Theorem 8 (Maclaurin). If x_1, …, x_n are positive, then

d_1 ≥ d_2^{1/2} ≥ ⋯ ≥ d_n^{1/n},

with equality if and only if x_1 = ⋯ = x_n. (Both are easy to test by machine; see the sketch at the end of this section.)

These inequalities and many others can be proved using the following trick.

Theorem 9. Suppose the inequality f(d_1, …, d_k) ≥ 0 holds for all real (resp. positive) x_1, …, x_n for some n ≥ k. Then it also holds for all real (resp. positive) x_1, …, x_{n+1}.

Proof. Let

P(t) = (t + x_1) ⋯ (t + x_{n+1}) = ∑_{i=0}^{n+1} C(n+1, i) d_i t^{n+1−i}

be the monic polynomial with roots −x_1, …, −x_{n+1}. Recall that between any two zeros of a differentiable function, there lies a zero of its derivative (Rolle's theorem). Thus the roots of P′(t) are all real if the x_i are real, and all negative if the x_i are positive. Since

P′(t) = ∑_{i=0}^{n+1} (n + 1 − i) C(n+1, i) d_i t^{n−i} = (n + 1) ∑_{i=0}^{n} C(n, i) d_i t^{n−i},

we have by assumption f(d_1, …, d_k) ≥ 0.

Incidentally, the same trick can also be used to prove certain polynomial identities.

Problems for Section 2.1

1. Prove that every symmetric polynomial in x_1, …, x_n can be (uniquely) expressed as a polynomial in the elementary symmetric polynomials.

2. Prove Newton's and Maclaurin's inequalities.

3. Prove Newton's identities: if s_i = x_1^i + ⋯ + x_n^i, then

c_0 s_k − c_1 s_{k−1} + ⋯ + (−1)^{k−1} c_{k−1} s_1 + (−1)^k k c_k = 0.

(Hint: first consider the case n = k.)

4. (Hungary) Let f(x) = x^n + a_{n−1}x^{n−1} + ⋯ + a_1x + 1 be a polynomial with nonnegative real coefficients and n real roots. Prove that f(x) ≥ (1 + x)^n for all x ≥ 0.

5. (Gauss-??) Let P(z) be a polynomial with complex coefficients. Prove that the (complex) zeroes of P′(z) all lie in the convex hull of the zeroes of P(z). Deduce that if S is a convex subset of the complex plane (e.g., the unit disk), and if f(d_1, …, d_k) ≥ 0 holds for all x_1, …, x_n ∈ S for some n ≥ k, then the same holds for x_1, …, x_{n+1} ∈ S.

6. Prove Descartes' Rule of Signs: let P(x) be a polynomial with real coefficients written as P(x) = ∑ a_{k_i} x^{k_i}, where all of the a_{k_i} are nonzero. Prove that the number of positive roots of P(x), counting multiplicities, is equal to the number of sign changes (the number of i such that a_{k_{i−1}} a_{k_i} < 0) minus a nonnegative even integer. (For negative roots, apply the same criterion to P(−x).)
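Newton's and Maclaurin's inequalities are also easy to experiment with by machine, as promised above. The sketch below (our own illustration; the helper name is not from the notes) computes the symmetric averages d_i from the coefficients of ∏(t + x_i) and checks both chains of inequalities on random positive inputs:

```python
# Check Newton's and Maclaurin's inequalities on random positive reals.
# Illustrative sketch only; `symmetric_averages` is our own helper name.
import math, random

def symmetric_averages(xs):
    """Return d_0..d_n where d_i = c_i / C(n, i) and
    prod(t + x_i) = sum_i c_i t^(n - i)."""
    n = len(xs)
    c = [1.0] + [0.0] * n
    for x in xs:                       # multiply the polynomial by (t + x)
        for i in range(n, 0, -1):
            c[i] += x * c[i - 1]
    return [c[i] / math.comb(n, i) for i in range(n + 1)]

xs = [random.uniform(0.1, 10.0) for _ in range(6)]
d = symmetric_averages(xs)
# Newton: d_i^2 >= d_{i-1} d_{i+1} for i = 1, ..., n-1
assert all(d[i] ** 2 >= d[i - 1] * d[i + 1] - 1e-9
           for i in range(1, len(d) - 1))
# Maclaurin: d_1 >= d_2^(1/2) >= ... >= d_n^(1/n)
m = [d[i] ** (1.0 / i) for i in range(1, len(d))]
assert all(m[i] >= m[i + 1] - 1e-9 for i in range(len(m) - 1))
print("Newton and Maclaurin verified for", [round(x, 3) for x in xs])
```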
2.2 The idiot's guide to homogeneous inequalities

Suppose one is given a homogeneous symmetric polynomial P and asked to prove that P(x_1, …, x_n) ≥ 0. How should one proceed?

Our first step is purely formal, but may be psychologically helpful. We introduce the following notation:

∑_sym Q(x_1, …, x_n) = ∑_σ Q(x_{σ(1)}, …, x_{σ(n)})

where σ runs over all permutations of 1, …, n (for a total of n! terms). For example, if n = 3, and we write x, y, z for x_1, x_2, x_3,

∑_sym x³ = 2x³ + 2y³ + 2z³
∑_sym x²y = x²y + y²z + z²x + x²z + y²x + z²y
∑_sym xyz = 6xyz.

Using symmetric sum notation can help prevent errors, particularly when one begins with rational functions whose denominators must first be cleared. Of course, it is always a good algebra check to make sure that equality continues to hold when it's supposed to.

In passing, we note that other types of symmetric sums can be useful when the inequalities in question do not have complete symmetry, most notably cyclic summation

∑_cyclic x²y = x²y + y²z + z²x.

However, it is probably a bad idea to mix, say, cyclic and symmetric sums in the same calculation!

Next, we attempt to "bunch" the terms of P into expressions which are nonnegative for a simple reason. For example,

∑_sym (x³ − xyz) ≥ 0

by the AM-GM inequality, while

∑_sym (x²y² − x²yz) ≥ 0

by a slightly jazzed-up but no more sophisticated argument: we have x²y² + x²z² ≥ 2x²yz again by AM-GM.

We can formalize what we are doing using the notion of majorization. If s = (s_1, …, s_n) and t = (t_1, …, t_n) are two nonincreasing sequences, we say that s majorizes t if s_1 + ⋯ + s_n = t_1 + ⋯ + t_n and s_1 + ⋯ + s_i ≥ t_1 + ⋯ + t_i for i = 1, …, n.

Theorem 10 ("Bunching"). If s and t are sequences of nonnegative reals such that s majorizes t, then

∑_sym x_1^{s_1} ⋯ x_n^{s_n} ≥ ∑_sym x_1^{t_1} ⋯ x_n^{t_n}.

Proof. One first shows that if s majorizes t, then there exist nonnegative constants k_σ, as σ ranges over the permutations of {1, …, n}, whose sum is 1 and such that

∑_σ k_σ (s_{σ(1)}, …, s_{σ(n)}) = (t_1, …, t_n)

(and conversely). Then apply weighted AM-GM as follows:

∑_σ x_1^{s_{σ(1)}} ⋯ x_n^{s_{σ(n)}} = ∑_{σ,τ} k_τ x_1^{s_{σ(τ(1))}} ⋯ x_n^{s_{σ(τ(n))}}
≥ ∑_σ x_1^{∑_τ k_τ s_{σ(τ(1))}} ⋯ x_n^{∑_τ k_τ s_{σ(τ(n))}} = ∑_σ x_1^{t_{σ(1)}} ⋯ x_n^{t_{σ(n)}}.

If the indices in the above proof are too bewildering, here's an example to illustrate what's going on: for s = (5, 2, 1) and t = (3, 3, 2), we have (3, 3, 2) = (5, 2, 1) + … and so BLAH.

Example 11 (USA, 1997). Prove that, for all positive real numbers a, b, c,

1/(a³ + b³ + abc) + 1/(b³ + c³ + abc) + 1/(c³ + a³ + abc) ≤ 1/(abc).

Solution. Clearing denominators and multiplying by 2, we have

∑_sym (a³ + b³ + abc)(b³ + c³ + abc) abc ≤ 2(a³ + b³ + abc)(b³ + c³ + abc)(c³ + a³ + abc),

which simplifies to

∑_sym (a⁷bc + 3a⁴b⁴c + 4a⁵b²c² + a³b³c³) ≤ ∑_sym (a³b³c³ + 2a⁶b³ + 3a⁴b⁴c + 2a⁵b²c² + a⁷bc),

and in turn to

∑_sym (2a⁶b³ − 2a⁵b²c²) ≥ 0,

which holds by bunching. (A numerical check of bunching in this spirit appears below.)

In this case we were fortunate that after slogging through the algebra, the resulting symmetric polynomial inequality was quite straightforward. Alas, there are cases where bunching will not suffice, but for these we have the beautiful inequality of Schur. Note the extra equality condition in Schur's inequality; this is a much stronger result than AM-GM, and so cannot follow from a direct AM-GM argument.
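Here is the promised numerical check of bunching (our own sketch, not part of the original notes): it verifies that s = (5, 2, 1) majorizes t = (3, 3, 2), then tests the inequality of Theorem 10 on random positive triples.

```python
# Numerical check of Theorem 10 ("Bunching"). Illustrative sketch only.
import random
from itertools import permutations

def majorizes(s, t):
    """True if the nonincreasing sequence s majorizes t."""
    assert sum(s) == sum(t)
    partial = 0
    for a, b in zip(s, t):
        partial += a - b
        if partial < 0:
            return False
    return True

def sym_sum(exps, xs):
    """The symmetric sum over permutations sigma of prod_i x_i^e_sigma(i)."""
    total = 0.0
    for perm in permutations(exps):
        term = 1.0
        for x, e in zip(xs, perm):
            term *= x ** e
        total += term
    return total

s, t = (5, 2, 1), (3, 3, 2)           # the example from the text
assert majorizes(s, t)
for _ in range(100):
    xs = [random.uniform(0.1, 5.0) for _ in range(3)]
    assert sym_sum(s, xs) >= sym_sum(t, xs) - 1e-9
print("bunching verified on 100 random triples")
```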
In general, one can avoid false leads by remembering that all of the steps in the proof of a given inequality must have equality conditions at least as permissive as those of the desired result!

Theorem 12 (Schur). Let x, y, z be nonnegative real numbers. Then for any r > 0,

x^r(x − y)(x − z) + y^r(y − z)(y − x) + z^r(z − x)(z − y) ≥ 0.

Equality holds if and only if x = y = z, or if two of x, y, z are equal and the third is zero.

Proof. Since the inequality is symmetric in the three variables, we may assume without loss of generality that x ≥ y ≥ z. Then the given inequality may be rewritten as

(x − y)[x^r(x − z) − y^r(y − z)] + z^r(x − z)(y − z) ≥ 0,

and every term on the left-hand side is clearly nonnegative.

Keep an eye out for the trick we just used: creatively rearranging a polynomial into the sum of products of obviously nonnegative terms.

The most commonly used case of Schur's inequality is r = 1, which, depending on your notation, can be written

3d_1³ + d_3 ≥ 4d_1d_2 or ∑_sym (x³ − 2x²y + xyz) ≥ 0.

If Schur is invoked with no mention of r, you should assume r = 1.

Example 13 (Japan, 1997). Let a, b, c be positive real numbers. Prove that

(b + c − a)²/((b + c)² + a²) + (c + a − b)²/((c + a)² + b²) + (a + b − c)²/((a + b)² + c²) ≥ 3/5.

Solution. We first simplify slightly, and introduce symmetric sum notation:

∑_sym (2ab + 2ac)/(a² + b² + c² + 2bc) ≤ 24/5.

Writing s = a² + b² + c², and clearing denominators, this becomes

5s² ∑_sym ab + 10s ∑_sym a²bc + 20 ∑_sym a³b²c ≤ 6s³ + 6s² ∑_sym ab + 12s ∑_sym a²bc + 48a²b²c²,

which simplifies a bit to

6s³ + s² ∑_sym ab + 2s ∑_sym a²bc + 8 ∑_sym a²b²c² ≥ 10s ∑_sym a²bc + 20 ∑_sym a³b²c.

Now we multiply out the powers of s:

∑_sym (3a⁶ + 2a⁵b − 2a⁴b² + 3a⁴bc + 2a³b³ − 12a³b²c + 4a²b²c²) ≥ 0.

The trouble with proving this by AM-GM alone is the a²b²c² with a positive coefficient, since it is the term with the most evenly distributed exponents. We save face using Schur's inequality (multiplied by 4abc):

∑_sym (4a⁴bc − 8a³b²c + 4a²b²c²) ≥ 0,

which reduces our claim to

∑_sym (3a⁶ + 2a⁵b − 2a⁴b² − a⁴bc + 2a³b³ − 4a³b²c) ≥ 0.

Fortunately, this is a sum of four expressions which are nonnegative by weighted AM-GM:

0 ≤ 2 ∑_sym ((2a⁶ + b⁶)/3 − a⁴b²)
0 ≤ ∑_sym ((4a⁶ + b⁶ + c⁶)/6 − a⁴bc)
0 ≤ 2 ∑_sym ((2a³b³ + c³a³)/3 − a³b²c)
0 ≤ 2 ∑_sym ((2a⁵b + a⁵c + 2ab⁵ + ac⁵)/6 − a³b²c).

Equality holds in each case of AM-GM, and in Schur, if and only if a = b = c.

Problems for Section 2.2

1. Suppose r is an even integer. Show that Schur's inequality still holds when x, y, z are allowed to be arbitrary real numbers (not necessarily positive).

2. (Iran, 1996) Prove the following inequality for positive real numbers x, y, z:

(xy + yz + zx)(1/(x + y)² + 1/(y + z)² + 1/(z + x)²) ≥ 9/4.

3. The author's solution to the USAMO 1997 problem also uses bunching, but in a subtler way. Can you find it? (Hint: try replacing each summand on the left with one that factors.)

4. (MOP 1998)

5. Prove that for x, y, z > 0,

x/((x + y)(x + z)) + y/((y + z)(y + x)) + z/((z + x)(z + y)) ≤ 9/(4(x + y + z)).

2.3 Variations: inhomogeneity and constraints

One can complicate the picture from the previous section in two ways: one can make the polynomials inhomogeneous, or one can add additional constraints. In many cases, one can reduce to a homogeneous, unconstrained inequality by creative manipulations or substitutions; we illustrate this process here.

Example 14 (IMO 1995/2). Let a, b, c be positive real numbers such that abc = 1. Prove that

1/(a³(b + c)) + 1/(b³(c + a)) + 1/(c³(a + b)) ≥ 3/2.
Solution. We eliminate both the nonhomogeneity and the constraint by instead proving that

1/(a³(b + c)) + 1/(b³(c + a)) + 1/(c³(a + b)) ≥ 3/(2(abc)^{4/3}).

This still doesn't look too appetizing; we'd prefer to have simpler denominators. So we make the additional substitution a = 1/x, b = 1/y, c = 1/z, in which case the inequality becomes

x²/(y + z) + y²/(z + x) + z²/(x + y) ≥ (3/2)(xyz)^{1/3}.   (2.1)

Now we may follow the paradigm: multiply out and bunch. We leave the details to the reader. (We will revisit this inequality several times later.)

On the other hand, sometimes more complicated maneuvers are required, as in this offbeat example.

Example 15 (Vietnam, 1996). Let a, b, c, d be four nonnegative real numbers satisfying the conditions

2(ab + ac + ad + bc + bd + cd) + abc + abd + acd + bcd = 16.

Prove that

a + b + c + d ≥ (2/3)(ab + ac + ad + bc + bd + cd)

and determine when equality holds.

Solution (by Sasha Schwartz). Adopting the notation from the previous section, we want to show that

3d_2 + d_3 = 4 ⟹ d_1 ≥ d_2.

Assume on the contrary that we have 3d_2 + d_3 = 4 but d_1 < d_2. By Schur's inequality plus Theorem ??, we have

3d_1³ + d_3 ≥ 4d_1d_2.

Substituting d_3 = 4 − 3d_2 gives

3d_1³ + 4 ≥ (4d_1 + 3)d_2 > 4d_1² + 3d_1,

which when we collect and factor implies (3d_1 − 4)(d_1² − 1) > 0. However, on one hand 3d_1 − 4 < 3d_2 − 4 = −d_3 < 0; on the other hand, by Maclaurin's inequality d_1² ≥ d_2 > d_1, so d_1 > 1. Thus (3d_1 − 4)(d_1² − 1) is negative, a contradiction. As for equality, we see it implies (3d_1 − 4)(d_1² − 1) = 0 as well as equality in Maclaurin and Schur, so d_1 = d_2 = d_3 = 1.

Problems for Section 2.3

1. (IMO 1984/1) Prove that

0 ≤ yz + zx + xy − 2xyz ≤ 7/27,

where x, y and z are non-negative real numbers for which x + y + z = 1.

2. (Ireland, 1998) Let a, b, c be positive real numbers such that a + b + c ≥ abc. Prove that a² + b² + c² ≥ abc. (In fact, the right hand side can be improved to √3·abc.)

3. (Bulgaria, 1998) Let a, b, c be positive real numbers such that abc = 1. Prove that

1/(1 + a + b) + 1/(1 + b + c) + 1/(1 + c + a) ≤ 1/(2 + a) + 1/(2 + b) + 1/(2 + c).

2.4 Substitutions, algebraic and trigonometric

Sometimes a problem can be simplified by making a suitable substitution. In particular, this technique can often be used to simplify unwieldy constraints.

One particular substitution occurs frequently in problems of geometric origin. The condition that the numbers a, b, c are the sides of a triangle is equivalent to the constraints

a + b > c, b + c > a, c + a > b

coming from the triangle inequality. If we let x = (b + c − a)/2, y = (c + a − b)/2, z = (a + b − c)/2, then the constraints become x, y, z > 0, and the original variables are also easy to express:

a = y + z, b = z + x, c = x + y.

A more exotic substitution can be used when the constraint a + b + c = abc is given. Put

α = arctan a, β = arctan b, γ = arctan c;

then α + β + γ is a multiple of π. (If a, b, c are positive, one can also write them as the cotangents of three angles summing to π/2.)

Problems for Section 2.4

1. (IMO 1983/6) Let a, b, c be the lengths of the sides of a triangle. Prove that

a²b(a − b) + b²c(b − c) + c²a(c − a) ≥ 0,

and determine when equality occurs.

2. (Asian Pacific, 1996) Let a, b, c be the lengths of the sides of a triangle. Prove that

√(a + b − c) + √(b + c − a) + √(c + a − b) ≤ √a + √b + √c.

3. (MOP, 1999) Let a, b, c be the lengths of the sides of a triangle. Prove that

a³ + b³ + c³ − 3abc ≥ 2 max{|a − b|³, |b − c|³, |c − a|³}.
4. (Arbelos) Prove that if a, b, c are the sides of a triangle and

2(ab² + bc² + ca²) = a²b + b²c + c²a + 3abc,

then the triangle is equilateral.

5. (Korea, 1998) For positive real numbers a, b, c with a + b + c = abc, show that

1/√(1 + a²) + 1/√(1 + b²) + 1/√(1 + c²) ≤ 3/2,

and determine when equality occurs. (Try proving this by dehomogenizing and you'll appreciate the value of the trig substitution!)

Chapter 3
The toolbox

In principle, just about any inequality can be reduced to the basic principles we have outlined so far. This proves to be fairly inefficient in practice, since one spends a lot of time repeating the same reduction arguments. More convenient is to invoke as necessary some of the famous classical inequalities described in this chapter. We only barely scratch the surface here of the known body of inequalities; already [?] provides further examples, and [?] more still.

3.1 Power means

The power means constitute a simultaneous generalization of the arithmetic and geometric means; the basic inequality governing them leads to a raft of new statements, and exposes a symmetry in the AM-GM inequality that would not otherwise be evident.

For any real number r ≠ 0, and any positive reals x_1, …, x_n, we define the r-th power mean of the x_i as

M^r(x_1, …, x_n) = ((x_1^r + ⋯ + x_n^r)/n)^{1/r}.

More generally, if w_1, …, w_n are positive real numbers adding up to 1, we may define the weighted r-th power mean

M^r_w(x_1, …, x_n) = (w_1x_1^r + ⋯ + w_nx_n^r)^{1/r}.

Clearly this quantity is continuous as a function of r (keeping the x_i fixed), so it makes sense to define M^0 as

lim_{r→0} M^r_w = lim_{r→0} exp((1/r) log(w_1x_1^r + ⋯ + w_nx_n^r))
= exp((d/dr)|_{r=0} log(w_1x_1^r + ⋯ + w_nx_n^r))
= exp((w_1 log x_1 + ⋯ + w_n log x_n)/(w_1 + ⋯ + w_n)) = x_1^{w_1} ⋯ x_n^{w_n},

or none other than the weighted geometric mean of the x_i.

Theorem 16 (Power mean inequality). If r > s, then

M^r_w(x_1, …, x_n) ≥ M^s_w(x_1, …, x_n)

with equality if and only if x_1 = ⋯ = x_n. (A numerical check of this chain of means appears at the end of this chapter.)

Proof. Everything will follow from the convexity of the function f(x) = x^r for r ≥ 1 (its second derivative is r(r − 1)x^{r−2}), but we have to be a bit careful with signs. Also, we'll assume neither r nor s is zero, as these cases can be deduced by taking limits.

First suppose r > s > 0. Then Jensen's inequality for the convex function f(x) = x^{r/s} applied to x_1^s, …, x_n^s gives

w_1x_1^r + ⋯ + w_nx_n^r ≥ (w_1x_1^s + ⋯ + w_nx_n^s)^{r/s},

and taking the 1/r-th power of both sides yields the desired inequality.

Now suppose 0 > r > s. Then f(x) = x^{r/s} is concave, so Jensen's inequality is reversed; however, taking 1/r-th powers reverses the inequality again.

Finally, in the case r > 0 > s, f(x) = x^{r/s} is again convex, and taking 1/r-th powers preserves the inequality. (Or this case can be deduced from the previous ones by comparing both power means to the geometric mean.)

Several of the power means have specific names. Of course r = 1 yields the arithmetic mean, and we defined r = 0 as the geometric mean. The case r = −1 is known as the harmonic mean, and the case r = 2 as the quadratic mean or root mean square.

Problems for Section 3.1

1. (Russia, 1995) Prove that for x, y > 0,

x/(x⁴ + y²) + y/(y⁴ + x²) ≤ 1/(xy).

2. (Romania, 1996) Let x_1, …, x_{n+1} be positive real numbers such that x_1 + ⋯ + x_n = x_{n+1}. Prove that

∑_{i=1}^n √(x_i(x_{n+1} − x_i)) ≤ √(∑_{i=1}^n x_{n+1}(x_{n+1} − x_i)).

3. (Poland, 1995) For a fixed integer n ≥ 1 compute the minimum value of the sum

x_1 + x_2²/2 + ⋯ + x_n^n/n

given that x_1,
…, x_n are positive numbers subject to the condition

1/x_1 + ⋯ + 1/x_n = n.

4. Let a, b, c, d ≥ 0. Prove that

1/a + 1/b + 4/c + 16/d ≥ 64/(a + b + c + d).

5. Extend the Rado/Popoviciu inequalities to power means by proving that

(M^r_w(x_1, …, x_n)^k − M^s_w(x_1, …, x_n)^k)/w_n ≥ (M^r_{w′}(x_1, …, x_{n−1})^k − M^s_{w′}(x_1, …, x_{n−1})^k)/w′_{n−1}

whenever r ≥ k ≥ s. Here the weight vector w′ = (w′_1, …, w′_{n−1}) is given by w′_i = w_i/(w_1 + ⋯ + w_{n−1}). (Hint: the general result can be easily deduced from the cases k = r, s.)

6. Prove the following "converse" to the power mean inequality: if r > s > 0, then

(∑_i x_i^r)^{1/r} ≤ (∑_i x_i^s)^{1/s}.

3.2 Cauchy-Schwarz, Hölder and Minkowski inequalities

This section consists of three progressively stronger inequalities, beginning with the simple but versatile Cauchy-Schwarz inequality.

Theorem 17 (Cauchy-Schwarz). For any real numbers a_1, …, a_n, b_1, …, b_n,

(a_1² + ⋯ + a_n²)(b_1² + ⋯ + b_n²) ≥ (a_1b_1 + ⋯ + a_nb_n)²,

with equality if the two sequences are proportional.

Proof. The difference between the two sides is

∑_{i<j} (a_ib_j − a_jb_i)²

and so is nonnegative, with equality iff a_ib_j = a_jb_i for all i, j.

The "sum of squares" trick used here is an important one; look for other opportunities to use it.

A clever example of the use of Cauchy-Schwarz is the proposer's solution of Example ??, in which the xyz-form of the desired equation is deduced as follows: start with the inequality

(x²/(y + z) + y²/(z + x) + z²/(x + y))((y + z) + (z + x) + (x + y)) ≥ (x + y + z)²,

which follows from Cauchy-Schwarz, cancel x + y + z from both sides, then apply AM-GM to replace x + y + z on the right side with 3(xyz)^{1/3}.

A more flexible variant of Cauchy-Schwarz is the following.

Theorem 18 (Hölder). Let w_1, …, w_n be positive real numbers whose sum is 1. For any positive real numbers a_{ij},

∏_{i=1}^n (∑_{j=1}^m a_{ij})^{w_i} ≥ ∑_{j=1}^m ∏_{i=1}^n a_{ij}^{w_i}.

Proof. By induction, it suffices to do the case n = 2, in which case we'll write w_1 = 1/p and w_2 = 1/q. Also without loss of generality, we may rescale the a_{ij} so that ∑_{j=1}^m a_{ij} = 1 for i = 1, 2. In this case, we need to prove

1 ≥ ∑_{j=1}^m a_{1j}^{1/p} a_{2j}^{1/q}.

On the other hand, by weighted AM-GM,

a_{1j}/p + a_{2j}/q ≥ a_{1j}^{1/p} a_{2j}^{1/q},

and the sum of the left side over j is 1/p + 1/q = 1, so the proof is complete. (The special case of weighted AM-GM we just used is sometimes called Young's inequality.)

Cauchy-Schwarz admits a geometric interpretation, as the triangle inequality for vectors in n-dimensional Euclidean space:

√(x_1² + ⋯ + x_n²) + √(y_1² + ⋯ + y_n²) ≥ √((x_1 + y_1)² + ⋯ + (x_n + y_n)²).

One can use this interpretation to prove Cauchy-Schwarz by reducing it to the case in two variables, since two nonparallel vectors span a plane. Instead, we take this as a starting point for our next generalization of Cauchy-Schwarz (which will be the case r = 2, s = 1 of Minkowski's inequality).

Theorem 19 (Minkowski). Let r > s be nonzero real numbers. Then for any positive real numbers a_{ij},

(∑_{j=1}^m (∑_{i=1}^n a_{ij}^r)^{s/r})^{1/s} ≥ (∑_{i=1}^n (∑_{j=1}^m a_{ij}^s)^{r/s})^{1/r}.

Minkowski's theorem may be interpreted as a comparison between taking the r-th power mean of each row of a matrix, then taking the s-th power mean of the results, versus taking the s-th power mean of each column, then taking the r-th power mean of the results. If r > s, the former result is larger, which should not be surprising since there we only use the smaller s-th power mean operation once.

This interpretation also tells us what Minkowski's theorem should say in case one of r, s is zero. The result is none other than Hölder's inequality!
This interpretation also tells us what Minkowski’s theorem should say in case one of r, s is zero. The result is none other than H¨ older’s inequality! 21 Proof. First suppose r > s > 0. Write L and R for the left and right sides, respectively, and for convenience, write ti = ∑mj=1 asij . Then Rr = n ∑ i=1 tr/s i = n ∑ i=1 titr/s −1 i = m ∑ j=1 ( n∑ i=1 asij tr/s −1 i ) ≤ m ∑ j=1 ( n∑ i=1 arij )s/r ( n∑ i=1 tr/s i )1−s/r = LsRr−s, where the one inequality is a (term-by-term) invocation of H¨ older. Next suppose r > 0 > s . Then the above proof carries through provided we can prove a certain variation of H¨ older’s inequality with the direction reversed. We have left this to you (Problem ?? ). Finally suppose 0 > r > s . Then replacing aij by 1 /a ij for all i, j turns the desired inequality into another instance of Minkowski, with r and s replaced by −s and −r. This instance follows from what we proved above. Problems for Section 3.2 (Iran, 1998) Let x, y, z be real numbers not less than 1, such that 1 /x + 1 /y + 1 /z = 2. Prove that √x + y + z ≥ √x − 1 + √y − 1 + √z − 1. Complete the proof of Minkowski’s inequality by proving that if k < 0 and a1, . . . , a n, b1, . . . , b n > 0, then ak 1 b1−k 1 · · · + aknb1−kn ≥ (a1 + · · · + an)k(b1 + · · · + bn)1−k. (Hint: reduce to H¨ older.) 3. Formulate a weighted version of Minkowski’s inequality, with one weight corresponding to each aij . Then show that this apparently stronger result follows from Theorem ?? !4. (Titu Andreescu) Let P be a polynomial with positive coefficients. Prove that if P ( 1 x ) ≥ 1 P (x)holds for x = 1, then it holds for every x > 0. 22 5. Prove that for w, x, y, z ≥ 0, w6x3 + x6y3 + y6z3 + z6x3 ≥ w2x5y2 + x2y5z2 + y2z5w2 + z2x5y2. (This can be done either with H¨ older or with weighted AM-GM; try it both ways.) 6. (China) Let ri, s i, t i, u i, v i be real numbers not less than 1, for i = 1 , 2, . . . , n , and let R, S, T, U, V be the respective arithmetic means of the ri, s i, t i, u i, v i. Prove that n ∏ i=1 risitiuivi + 1 risitiuivi − 1 ≤ ( RST U V + 1 RST U V − 1 )n . (Hint: use H¨ older and the concavity of f (x) = ( x5 − 1) /(x5 + 1). 7. (IMO 1992/5) Let S be a finite set of points in three-dimensional space. Let Sx, S y, S z be the orthogonal projections of S onto the yz , zx , xy planes, respectively. Show that |S|2 ≤ | Sx|| Sy|| Sz |. (Hint: Use Cauchy-Schwarz to prove the inequality (∑ i,j,k aij bjk cki )2 ≤ ∑ i,j a2 ij ∑ j,k b2 jk ∑ k,i c2 ki . Then apply this with each variable set to 0 or 1.) 3.3 Rearrangement, Chebyshev and “arrange in or-der” Our next theorem is in fact a triviality that every businessman knows: you make more money by selling most of your goods at a high price and a few at a low price than vice versa. Nonetheless it can be useful! (In fact, an equivalent form of this theorem was on the 1975 IMO.) Theorem 20 (Rearrangement). Given two increasing sequences x1, . . . , x n and y1, . . . , y n of real numbers, the sum n∑ i=1 xiyσ(i), for σ a permutation of {1, . . . , n }, is maximized when σ is the identity permutation and minimized when σ is the reversing permutation. 23 Proof. For n = 2, the inequality ac + bd ≥ ad + bc is equivalent, after collecting terms and factoring, to ( a − b)( c − d) ≥ 0, and so holds when a ≥ b and c ≥ d. 
We leave it to the reader to deduce the general case from this by successively swapping pairs of the y_i.

Another way to state this argument involves "partial summation" (which the calculus-equipped reader will recognize as analogous to doing integration by parts). Let

s_i = y_{σ(i)} + ⋯ + y_{σ(n)}

for i = 1, …, n, and write

∑_{i=1}^n x_i y_{σ(i)} = x_1 s_1 + ∑_{j=2}^n (x_j − x_{j−1}) s_j.

The desired result follows from this expression and the inequalities

y_1 + ⋯ + y_{n−j+1} ≤ s_j ≤ y_j + ⋯ + y_n,

in which the left equality holds for σ equal to the reversing permutation and the right equality holds for σ equal to the identity permutation.

One important consequence of rearrangement is Chebyshev's inequality.

Theorem 21 (Chebyshev). Let x_1, …, x_n and y_1, …, y_n be two sequences of real numbers, at least one of which consists entirely of positive numbers. Assume that x_1 < ⋯ < x_n and y_1 < ⋯ < y_n. Then

((x_1 + ⋯ + x_n)/n) · ((y_1 + ⋯ + y_n)/n) ≤ (x_1y_1 + ⋯ + x_ny_n)/n.

If instead we assume x_1 > ⋯ > x_n and y_1 < ⋯ < y_n, the inequality is reversed.

Proof. Apply rearrangement to x_1, x_2, …, x_n and y_1, …, y_n, but repeating each number n times.

The great power of Chebyshev's inequality is its ability to split a sum of complicated terms into two simpler sums. We illustrate this ability with yet another solution to IMO 1995/2. Recall that we need to prove

x²/(y + z) + y²/(z + x) + z²/(x + y) ≥ (x + y + z)/2.

Without loss of generality, we may assume x ≤ y ≤ z, in which case 1/(y + z) ≤ 1/(z + x) ≤ 1/(x + y). Thus we may apply Chebyshev to deduce

x²/(y + z) + y²/(z + x) + z²/(x + y) ≥ ((x² + y² + z²)/3) · (1/(y + z) + 1/(z + x) + 1/(x + y)).

By the Power Mean inequality,

(x² + y² + z²)/3 ≥ ((x + y + z)/3)²
(1/3)(1/(y + z) + 1/(z + x) + 1/(x + y)) ≥ 3/(2(x + y + z)),

and these three inequalities constitute the proof.

The rearrangement inequality, Chebyshev's inequality, and Schur's inequality are all general examples of the "arrange in order" principle: when the roles of several variables are symmetric, it may be profitable to desymmetrize by assuming (without loss of generality) that they are sorted in a particular order by size.

Problems for Section 3.3

1. Deduce Schur's inequality from the rearrangement inequality.

2. For a, b, c > 0, prove that a^a b^b c^c ≥ a^b b^c c^a.

3. (IMO 1971/1) Prove that the following assertion is true for n = 3 and n = 5 and false for every other natural number n > 2: if a_1, …, a_n are arbitrary real numbers, then

∑_{i=1}^n ∏_{j≠i} (a_i − a_j) ≥ 0.

4. (Proposed for 1999 USAMO) Let x, y, z be real numbers greater than 1. Prove that

x^{x²+2yz} y^{y²+2zx} z^{z²+2xy} ≥ (xyz)^{xy+yz+zx}.

3.4 Bernoulli's inequality

The following quaint-looking inequality occasionally comes in handy.

Theorem 22 (Bernoulli). For r > 1 and x > −1 (or r an even integer and x any real),

(1 + x)^r ≥ 1 + rx.

Proof. The function f(x) = (1 + x)^r has second derivative r(r − 1)(1 + x)^{r−2} and so is convex. The claim is simply the fact that this function lies above its tangent line at x = 0.

Problems for Section 3.4

1. (USA, 1992) Let a = (m^{m+1} + n^{n+1})/(m^m + n^n) for m, n positive integers. Prove that a^m + a^n ≥ m^m + n^n. (The problem came with the hint to consider the ratio (a^N − N^N)/(a − N) for real a ≥ 0 and integer N ≥ 1. On the other hand, you can prove the claim for real m, n ≥ 1 using Bernoulli.)
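Before leaving the toolbox, here is the numerical check of the power mean inequality promised back in Section 3.1. It is our own sketch (not part of the original notes); it computes M^r for several exponents, treating r = 0 as the geometric mean, and confirms that the means increase with r:

```python
# Numerical check of the power mean inequality: M^r >= M^s whenever r > s.
# Illustrative sketch only; unweighted means for simplicity.
import math, random

def power_mean(xs, r):
    if r == 0:                         # limiting case: the geometric mean
        return math.exp(sum(map(math.log, xs)) / len(xs))
    return (sum(x ** r for x in xs) / len(xs)) ** (1.0 / r)

xs = [random.uniform(0.5, 9.0) for _ in range(5)]
exponents = [-2, -1, 0, 1, 2, 3]       # harmonic, geometric, arithmetic, ...
means = [power_mean(xs, r) for r in exponents]
assert all(means[i] <= means[i + 1] + 1e-9 for i in range(len(means) - 1))
for r, m in zip(exponents, means):
    print(f"M^{r} = {m:.6f}")
```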
Chapter 4
Calculus

To some extent, I regard this section, as should you, as analogous to sex education in school: it doesn't condone or encourage you to use calculus on Olympiad inequalities, but rather seeks to ensure that if you insist on using calculus, you do so properly.

Having made that disclaimer, let me turn around and praise calculus as a wonderful discovery that immeasurably improves our understanding of real-valued functions. If you haven't encountered calculus yet in school, this section will be a taste of what awaits you, but no more than that; our treatment is far too compressed to give a comprehensive exposition. After all, isn't that what textbooks are for?

Finally, let me advise the reader who thinks he/she knows calculus already not to breeze through this section too quickly. Read the text and make sure you can write down the proofs requested in the exercises.

4.1 Limits, continuity, and derivatives

The definition of limit attempts to formalize the idea of evaluating a function at a point where it might not be defined, by taking values at points "infinitely close" to the given point. Capturing this idea in precise mathematical language turned out to be a tricky business which remained unresolved until long after calculus had proven its utility in the hands of Newton and Leibniz; the definition we use today was given by Weierstrass.

Let f be a function defined on an open interval, except possibly at a single point t. We say the limit of f at t exists and equals L if for each positive real ε, there exists a positive real δ (depending on ε, of course) such that

0 < |x − t| < δ ⟹ |f(x) − L| < ε.

We notate this lim_{x→t} f(x) = L. If f is also defined at t and f(t) = L, we say f is continuous at t.

Most common functions (polynomials, sine, cosine, exponential, logarithm) are continuous for all real numbers (the logarithm only for positive reals), and those which are not (rational functions, tangent) fail to be continuous only at the points where they go to infinity, so the limits in question do not exist there.

The first example of an "interesting" limit occurs in the definition of the derivative. Again, let f be a function defined on an open interval and t a point in that interval; this time we require that f(t) is defined as well. If the limit

lim_{x→t} (f(x) − f(t))/(x − t)

exists, we call that the derivative of f at t and notate it f′(t). We also say that f is differentiable at t (the process of taking the derivative is called differentiation).

For example, if f(x) = x², then for x ≠ t, (f(x) − f(t))/(x − t) = x + t, and so the limit exists and f′(t) = 2t. More generally, you will show in the exercises that for f(x) = x^n, f′(t) = nt^{n−1}. On the other hand, for other functions, e.g. f(x) = sin x, the difference quotient cannot be expressed in closed form, so certain inequalities must be established in order to calculate the derivative (as will also take place in the exercises).

The derivative can be geometrically interpreted as the slope of the tangent line to the graph of f at the point (t, f(t)). This is because the tangent line to a curve can be regarded as the limit of secant lines, and the quantity (f(x + t) − f(x))/t is precisely the slope of the secant line through (x, f(x)) and (x + t, f(x + t)). (The first half of this sentence is entirely unrigorous, but it can be rehabilitated.)
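The difference quotient is itself pleasant to experiment with numerically. The following sketch (ours, not from the notes; the test function and sample point are arbitrary choices) watches (f(x + h) − f(x))/h approach the derivative as h shrinks:

```python
# The difference quotient (f(x+h) - f(x))/h approaching f'(x).
# Illustrative sketch; f(x) = x^2 and the point x = 3 are our own choices.
def f(x):
    return x * x

x = 3.0
for h in [10.0 ** (-k) for k in range(1, 7)]:
    print(f"h = {h:.0e}, quotient = {(f(x + h) - f(x)) / h:.8f}")
# The printed quotients approach f'(3) = 2 * 3 = 6 as h -> 0.
```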
It is easy to see that a differentiable function must be continuous, but the converse is not true; the function f(x) = |x| (absolute value) is continuous but not differentiable at x = 0.

An important property of continuous functions, though one we will not have direct need for here, is the intermediate value theorem. This theorem says the values of a continuous function cannot "jump", as the tangent function does near π/2.

Theorem 23. Suppose the function f is continuous on the closed interval [a, b]. Then for any c between f(a) and f(b), there exists x ∈ [a, b] such that f(x) = c.

Proof. Any proof of this theorem tends to be a bit unsatisfying, because it ultimately boils down to one's definition of the real numbers. For example, one consequence of the standard definition is that every set of real numbers which is bounded above has a least upper bound. In particular, if f(a) ≤ c and f(b) ≥ c, then the set of x ∈ [a, b] such that f(x) < c has a least upper bound y. By continuity, f(y) ≤ c; on the other hand, if f(y) < c, then f(y + t) < c for all t less than some positive ε, again by continuity, in which case y would fail to be an upper bound of the set, a contradiction. Hence f(y) = c.

Another important property of continuous functions is the "extreme value theorem".

Theorem 24. A continuous function on a closed interval has a global maximum and minimum.

Proof.

This statement is of course false for an open or infinite interval; the function may go to infinity at one end, or may approach an extreme value without achieving it. (In technical terms, a closed interval is compact while an open interval is not.)

Problems for Section 4.1

1. Prove that f(x) = x is continuous.

2. Prove that the product of two continuous functions is continuous, and that the reciprocal of a continuous function is continuous at all points where the original function is nonzero. Deduce that all polynomials and rational functions are continuous.

3. (Product rule) Prove that (fg)′ = fg′ + f′g and find a similar formula for (f/g)′. (No, it's not f′/g − f/g′. Try again.)

4. (Chain rule) If h(x) = f(g(x)), prove that h′(x) = f′(g(x)) g′(x).

5. Show that the derivative of x^n is nx^{n−1} for n ∈ Z.

6. Prove that sin x < x < tan x for 0 < x < π/2. (Hint: use a geometric interpretation, and remember that x represents an angle in radians!) Conclude that lim_{x→0} (sin x)/x = 1.

7. Show that the derivative of sin x is cos x and that of cos x is −sin x. While you're at it, compute the derivatives of the other trigonometric functions.

8. Show that an increasing function on a closed interval satisfying the intermediate value theorem must be continuous. (Of course, this can fail if the function is not increasing!) In particular, the functions c^x (where c > 0 is a constant) and log x are continuous.

9. For c > 0 a constant, the function f(x) = (c^x − 1)/x is continuous for x ≠ 0 by the previous exercise. Prove that its limit at 0 exists. (Hint: if c > 1, show that f is increasing for x ≠ 0; it suffices to prove this for rational x by continuity.)

10. Prove that the derivative of c^x equals kc^x for some constant k. (Use the previous exercise.) Also show (using the chain rule) that k is proportional to the logarithm of c. In fact, the base e of the natural logarithms is defined by the property that k = 1 when c = e.

11. Use the chain rule and the previous exercise to prove that the derivative of log x is 1/x. (Namely, take the derivative of e^{log x} = x.)

12.
(Generalized product rule) Let f(y, z) be a function of two variables, and let g(x) = f(x, x). Prove that g′(x) can be written as the sum of the derivative of f as a function of y alone (treating z as a constant function) and the derivative of f as a function of z alone, both evaluated at y = z = x. For example, the derivative of x^x is x^x log x + x^x.

4.2 Extrema and the first derivative

The derivative can be used to detect local extrema. A point t is a local maximum (resp. minimum) for a function f if f(t) ≥ f(x) (resp. f(t) ≤ f(x)) for all x in some open interval containing t.

Theorem 25. If t is a local extremum for f and f is differentiable at t, then f′(t) = 0.

Corollary 26 (Rolle). If f is differentiable on the interval [a, b] and f(a) = f(b) = 0, then there exists x ∈ [a, b] such that f′(x) = 0.

So for example, to find the extrema of a continuous function on a closed interval, it suffices to evaluate it at

• all points where the derivative vanishes,
• all points where the derivative is not defined, and
• the endpoints of the interval,

since we know the function has global minima and maxima, and each of these must occur at one of the aforementioned points. If the interval is open or infinite at either end, one must also check the limiting behavior of the function there.

A special case of what we have just said is independently useful: if a function is positive at the left end of an interval and has nonnegative derivative on the interval, it is positive on the entire interval.

4.3 Convexity and the second derivative

As noted earlier, a twice differentiable function is convex if and only if its second derivative is nonnegative.

4.4 More than one variable

Warning! The remainder of this chapter is somewhat rougher going than what came before, in part because we need to start using the language and ideas of linear algebra. Rest assured that none of the following material is referenced anywhere else in the notes.

We have a pretty good understanding now of the relationship between extremization and the derivative, for functions in a single variable. However, most inequalities in nature deal with functions in more than one variable, so it behooves us to extend the formalism of calculus to this setting. Note that whatever we allow the domain of a function to be, the range of our "functions" will always be the real numbers. (We have no need to develop the whole of multivariable calculus when all of our functions arise from extremization questions!)

The formalism becomes easier to set up in the language of linear algebra. If f is a function from a vector space to R defined in a neighborhood of (i.e. a ball around) a point x⃗, then we say the limit of f at x⃗ exists and equals L if for every ε > 0, there exists δ > 0 such that

0 < ||t⃗|| < δ ⟹ |f(x⃗ + t⃗) − L| < ε.

(The double bars mean length in the Euclidean sense, but any reasonable measure of the length of a vector will give an equivalent criterion; for example, measuring a vector by the maximum absolute value of its components, or by the sum of the absolute values of its components.) We say f is continuous at x⃗ if lim_{t⃗→0} f(x⃗ + t⃗) = f(x⃗).

If y⃗ is any vector and x⃗ is in the domain of f, we say the directional derivative of f along x⃗ in the direction y⃗ exists and equals f_{y⃗}(x⃗) if

f_{y⃗}(x⃗) = lim_{t→0} (f(x⃗ + t y⃗) − f(x⃗))/t.

If f is written as a function of variables x_1,
…, x_n, we call the directional derivative along the i-th standard basis vector the partial derivative of f with respect to x_i and denote it by ∂f/∂x_i. In other words, the partial derivative is the derivative of f as a function of x_i alone, regarding the other variables as constants.

TOTAL DERIVATIVE

Caveat! Since the derivative is not a "function" in our restricted sense (it takes values in a vector space, not R) we cannot take a "second derivative" (yet).

ASSUMING the derivative exists, it can be computed by taking partial derivatives along a basis.

4.5 Convexity in several variables

A function f defined on a convex subset of a vector space is said to be convex if for all x⃗, y⃗ in the domain and t ∈ [0, 1],

t f(x⃗) + (1 − t) f(y⃗) ≥ f(t x⃗ + (1 − t) y⃗).

Equivalently, f is convex if its restriction to any line is convex. Of course, we say f is concave if −f is convex.

The analogue of the second derivative test for convexity is the Hessian criterion. A symmetric matrix M (that is, one with M_{ij} = M_{ji} for all i, j) is said to be positive definite if M x⃗ · x⃗ > 0 for all nonzero vectors x⃗, or equivalently, if its eigenvalues are all real and positive. (The first condition implies the second because all eigenvalues of a symmetric matrix are real. The second implies the first because if M has positive eigenvalues, then it has a square root N which is also symmetric, and M x⃗ · x⃗ = (N x⃗) · (N x⃗) > 0.)

Theorem 27 (Hessian test). A twice differentiable function f(x_1, …, x_n) is convex in a region if and only if the Hessian matrix

H_{ij} = ∂²f/(∂x_i ∂x_j)

is positive definite everywhere in the region.

Note that the Hessian is symmetric because of the symmetry of mixed partials, so this statement makes sense.

Proof. The function f is convex if and only if its restriction to each line is convex, and the second derivative along a line through x⃗ in the direction of y⃗ is (up to a scale factor) just H y⃗ · y⃗ evaluated at x⃗. So f is convex if and only if H y⃗ · y⃗ > 0 for all nonzero y⃗, that is, if H is positive definite.

The bad news about this criterion is that determining whether a matrix is positive definite is not a priori an easy task: one cannot check M x⃗ · x⃗ ≥ 0 for every vector, so it seems one must compute all of the eigenvalues of M, which can be quite a headache. The good news is that there is a very nice criterion for positive definiteness of a symmetric matrix, due to Sylvester, that saves a lot of work.

Theorem 28 (Sylvester's criterion). An n × n symmetric matrix of real numbers is positive definite if and only if the determinant of the upper left k × k submatrix is positive for k = 1, …, n.

Proof. By the M x⃗ · x⃗ definition, the upper left k × k submatrix of a positive definite matrix is positive definite, and by the eigenvalue definition, a positive definite matrix has positive determinant. Hence Sylvester's criterion is indeed necessary for positive definiteness. We show the criterion is also sufficient by induction on n. BLAH.

Problems for Section 4.5

1. (IMO 1968/2) Prove that for all real numbers x_1, x_2, y_1, y_2, z_1, z_2 with x_1, x_2 > 0 and x_1y_1 > z_1², x_2y_2 > z_2², the inequality

8/((x_1 + x_2)(y_1 + y_2) − (z_1 + z_2)²) ≤ 1/(x_1y_1 − z_1²) + 1/(x_2y_2 − z_2²)

is satisfied, and determine when equality holds. (Yes, you really can apply the material of this section to the IMO! Verify convexity of the appropriate function using the Hessian and Sylvester's criterion.)
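Sylvester's criterion is straightforward to implement given a determinant routine. The sketch below is our own illustration (NumPy is our choice of tool, and the test function f(x, y) = x⁴ + y⁴ + 3xy is an invented example; neither appears in the notes): it tests positive definiteness via leading principal minors and applies the test to a Hessian at two sample points.

```python
# Sylvester's criterion: a symmetric real matrix is positive definite iff
# every leading principal minor is positive. Illustrative sketch only;
# the function f(x, y) = x^4 + y^4 + 3xy is our own example.
import numpy as np

def is_positive_definite(M):
    M = np.asarray(M, dtype=float)
    return all(np.linalg.det(M[:k, :k]) > 0 for k in range(1, len(M) + 1))

def hessian_f(x, y):
    """Hessian of f(x, y) = x^4 + y^4 + 3xy, computed by hand."""
    return [[12 * x ** 2, 3.0],
            [3.0, 12 * y ** 2]]

print(is_positive_definite(hessian_f(1.0, 1.0)))   # True: f is convex there
print(is_positive_definite(hessian_f(0.1, 0.1)))   # False: 2x2 minor < 0
```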
4.6 Constrained extrema and Lagrange multipliers

In the multivariable realm, a new phenomenon emerges that we did not have to consider in the one-dimensional case: sometimes we are asked to prove an inequality in the case where the variables satisfy some constraint.

The Lagrange multiplier criterion for an interior local extremum of the function f(x_1, . . . , x_n) under the constraint g(x_1, . . . , x_n) = c is the existence of λ such that

∂f/∂x_i (x_1, . . . , x_n) = λ ∂g/∂x_i (x_1, . . . , x_n)   for i = 1, . . . , n.

Putting these conditions together with the constraint on g, one may be able to solve and thus put restrictions on the locations of the extrema. (Notice that the duality of constrained optimization shows up in the symmetry between f and g in the criterion.)

It is even more critical here than in the one-variable case that the Lagrange multiplier condition is a necessary one only for an interior extremum. Unless one can prove that the given function is convex, and thus that an interior extremum must be a global one, one must also check all boundary situations, which is far from easy to do when (as often happens) these extend to infinity in some direction.

For a simple example, let f(x, y, z) = ax + by + cz with a, b, c constants, not all zero, and consider the constraint g(x, y, z) = 1, where g(x, y, z) = x² + y² + z². Then the Lagrange multiplier condition is that

a = 2λx,  b = 2λy,  c = 2λz.

The only points satisfying this condition plus the original constraint are

± (a, b, c) / √(a² + b² + c²),

and these are indeed the minimum and maximum for f under the constraint, as you may verify geometrically (and symbolically, in the sketch after the problems below).

Chapter 5 Coda

5.1 Quick reference

Here’s a handy reference guide to the techniques we’ve introduced.

• Arithmetic-geometric-harmonic means
• Arrange in order
• Bernoulli
• Bunching
• Cauchy-Schwarz
• Chebyshev
• Convexity
• Derivative test
• Duality for constrained optimization
• Equality conditions
• Factoring
• Geometric interpretations
• Hessian test
• Hölder
• Jensen
• Lagrange multipliers
• Maclaurin
• Minkowski
• Newton
• Power means
• Rearrangement
• Reduction of the number of variables (Theorem ??)
• Schur
• Smoothing
• Substitution (algebraic or trigonometric)
• Sum of squares
• Symmetric sums
• Unsmoothing (boundary extrema)

5.2 Additional problems

Here is an additional collection of problems covering the entire range of techniques we have introduced, and one or two that you’ll have to discover for yourselves!

Problems for Section 5.2

1. Let x, y, z > 0 with xyz = 1. Prove that x + y + z ≤ x² + y² + z².

2. The real numbers x_1, x_2, . . . , x_n belong to the interval [−1, 1] and the sum of their cubes is zero. Prove that their sum does not exceed n/3.

3. (IMO 1972/2) Let x_1, . . . , x_5 be positive reals such that

(x_{i+1}² − x_{i+3}x_{i+5})(x_{i+2}² − x_{i+3}x_{i+5}) ≤ 0

for i = 1, . . . , 5, where x_{n+5} = x_n for all n. Prove that x_1 = · · · = x_5.

4. (USAMO 1979/3) Let x, y, z ≥ 0 with x + y + z = 1. Prove that

x³ + y³ + z³ + 6xyz ≥ 1/4.

5. (Taiwan, 1995) Let P(x) = 1 + a_1x + · · · + a_{n−1}x^{n−1} + x^n be a polynomial with complex coefficients. Suppose the roots of P(x) are α_1, α_2, . . . , α_n with |α_1| > 1, |α_2| > 1, . . . , |α_j| > 1 and |α_{j+1}| ≤ 1, |α_{j+2}| ≤ 1, . . . , |α_n| ≤ 1. Prove that

∏_{i=1}^{j} |α_i| ≤ √(|a_0|² + |a_1|² + · · · + |a_n|²).

(Hint: look at the coefficients of P(x)·P̄(x), the latter being 1 + ā_1x + · · · + ā_{n−1}x^{n−1} + x^n.)

6. Prove that, for any real numbers x, y, z,

3(x² − x + 1)(y² − y + 1)(z² − z + 1) ≥ (xyz)² + xyz + 1.
7. (a) Prove that any polynomial P(x) such that P(x) ≥ 0 for all real x can be written as the sum of the squares of two polynomials.

(b) Prove that the polynomial

x²(x² − y²)(x² − 1) + y²(y² − 1)(y² − x²) + (1 − x²)(1 − y²)

is everywhere nonnegative, but cannot be written as the sum of squares of any number of polynomials. (One of Hilbert’s problems, solved by Artin and Schreier, was to prove that such a polynomial can always be written as the sum of squares of rational functions.)
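As a symbolic check of the Lagrange multiplier example from section 4.6 (our own sketch, not part of the original notes; it assumes Python with sympy available):

import sympy as sp

a, b, c = sp.symbols('a b c', positive=True)
n = sp.sqrt(a**2 + b**2 + c**2)
x, y, z = a/n, b/n, c/n        # the claimed constrained maximizer
lam = n/2                      # the corresponding multiplier

# The gradient conditions a = 2*lam*x, b = 2*lam*y, c = 2*lam*z
# and the constraint x^2 + y^2 + z^2 = 1 should all reduce to zero.
print(sp.simplify(a - 2*lam*x), sp.simplify(b - 2*lam*y), sp.simplify(c - 2*lam*z))
print(sp.simplify(x**2 + y**2 + z**2 - 1))

All four expressions simplify to 0, confirming that ±(a, b, c)/√(a² + b² + c²) satisfies the Lagrange conditions with λ = ±√(a² + b² + c²)/2.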
189344
https://www.chilimath.com/lessons/advanced-algebra/solving-exponential-equations-using-logarithms/
Solving Exponential Equations with Logs

How to Solve Exponential Equations using Logarithms

In our previous lesson, you learned how to solve exponential equations without logarithms. This time around, we want to solve exponential equations requiring the use of logarithms. Why? The reason is that we can’t manipulate the exponential equation to have the same or common base on both sides of the equation. If you encounter this type of problem, the following are the suggested steps:

Steps to Solve Exponential Equations using Logarithms

1) Keep the exponential expression by itself on one side of the equation.

2) Get the logarithms of both sides of the equation. You can use any bases for logs.

3) Solve for the variable. Keep the answer exact or give decimal approximations.

In addition to the steps above, make sure that you review the Basic Logarithm Rules because you will use them in one way or another.

Let’s go over some examples!

Examples of How to Solve Exponential Equations using Logarithms

Example 1: Solve the exponential equation [latex]{5^{2x}} = 21[/latex].

The good thing about this equation is that the exponential expression is already isolated on the left side. We can now take the logarithms of both sides of the equation. It doesn’t matter what base of the logarithm to use. The final answer should come out the same. The best choice for the base of the log operation is [latex]5[/latex] since it is the base of the exponential expression itself. However, we will also use in the calculation the common base of [latex]10[/latex], and the natural base of [latex]\color{red}e[/latex] (denoted by [latex]\color{blue}\ln[/latex]) just to show that in the end, they all have the same answers. Whichever base we pick, we get [latex]2x = {\log _5}21[/latex], so

[latex]x = {{{\log _5}21} \over 2} = {{\log 21} \over {2\log 5}} = {{\ln 21} \over {2\ln 5}} \approx 0.9458[/latex]

Example 2: Solve the exponential equation [latex]2\left( {{3^{x – 5}}} \right) = 12[/latex].

As you can see, the exponential expression on the left is not by itself. We must eliminate the number [latex]2[/latex] that is multiplying the exponential expression. To do that, divide both sides by [latex]2[/latex]. That would leave us just the exponential expression on the left, and [latex]6[/latex] on the right after simplification. It’s time to take the log of both sides. Since the exponential expression has base [latex]3[/latex], that’s the convenient base to use for the log operation. In addition, we will also solve this using the natural base [latex]e[/latex] just to compare if our final results agree. Either way,

[latex]x – 5 = {\log _3}6[/latex], so [latex]x = 5 + {\log _3}6 = 5 + {{\ln 6} \over {\ln 3}} \approx 6.6309[/latex]

Example 3: Solve the exponential equation [latex]2\left({\Large{{{{{e^{4x – 3}}} \over {{e^{x – 2}}}}}}} \right) – 7 = 13[/latex].

This looks like a mess at first. However, if you know how to start this out, the solution to this problem becomes a breeze. What we should do first is to simplify the expression inside the parenthesis. Use the Division Rule of Exponent by copying the common base of [latex]e[/latex] and subtracting the top by the bottom exponent, giving [latex]{e^{3x – 1}}[/latex]. Now isolate the exponential expression by adding [latex]7[/latex] to both sides, followed by dividing the entire equation by [latex]2[/latex], which leaves [latex]{e^{3x – 1}} = 10[/latex]. Take the logarithm of both sides. Use [latex]\color{red}\ln[/latex] because we have a base of [latex]e[/latex]. Then solve for the variable [latex]x[/latex]:

[latex]3x – 1 = \ln 10[/latex], so [latex]x = {{1 + \ln 10} \over 3} \approx 1.1009[/latex]

Example 4: Solve the exponential equation [latex]{1 \over 2}{\left( {{{10}^{x – 1}}} \right)^x} + 3 = 53[/latex].

Observe that the exponential expression is being raised to [latex]x[/latex].
Simplify this by applying the Power to a Power Rule. Do that by copying the base [latex]10[/latex] and multiplying its exponent by the outer exponent. It should look like this after doing so:

[latex]{1 \over 2}\left( {{{10}^{{x^2} – x}}} \right) + 3 = 53[/latex]

We can now isolate the exponential expression by subtracting [latex]3[/latex] from both sides and then multiplying both sides by [latex]2[/latex], giving [latex]{10^{{x^2} – x}} = 100[/latex]. Take the logarithm of both sides with base [latex]10[/latex]. If you just see a [latex]\color{red}\log[/latex] without any specific base, it is understood to have [latex]10[/latex] as its base. That gives [latex]{x^2} – x = \log 100 = 2[/latex]. We are going to solve this quadratic equation by the factoring method. Let’s move everything to the left side, therefore making the right side equal to zero: [latex]{x^2} – x – 2 = 0[/latex]. Factor the trinomial into two binomials: [latex]\left( {x – 2} \right)\left( {x + 1} \right) = 0[/latex]. Set each binomial factor equal to zero, then solve for [latex]x[/latex], giving [latex]x = 2[/latex] and [latex]x = -1[/latex].

Example 5: Solve the exponential equation [latex]{e^{2x}} – 7{e^x} + 10 = 0[/latex].

We will need a different strategy to solve this exponential equation. Observe that we can actually convert this into a factorable trinomial. First, we let [latex]m = {e^x}[/latex]. Rewrite the exponential expression using this substitution: [latex]{m^2} – 7m + 10 = 0[/latex]. Factor the trinomial as a product of two binomials: [latex]\left( {m – 2} \right)\left( {m – 5} \right) = 0[/latex]. Then replace [latex]m[/latex] by [latex]e^x[/latex] again. Finally, set each factor equal to zero and solve for [latex]x[/latex], as usual, using logarithms: [latex]{e^x} = 2[/latex] gives [latex]x = \ln 2[/latex], and [latex]{e^x} = 5[/latex] gives [latex]x = \ln 5[/latex].

You might also like these tutorials:

Solving Exponential Equations without Logarithms

Tags: Advanced Algebra, Lessons
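As a quick numerical check of Examples 1 and 5 (our own sketch, using only Python’s standard math module; not part of the original lesson):

import math

# Example 1: 5^(2x) = 21  =>  x = log_5(21) / 2
x1 = math.log(21, 5) / 2
print(round(x1, 4), round(5 ** (2 * x1), 4))   # 0.9458, and 5^(2x) recovers 21.0

# Example 5: e^(2x) - 7e^x + 10 = 0  =>  e^x is 2 or 5
for x in (math.log(2), math.log(5)):
    print(math.exp(2 * x) - 7 * math.exp(x) + 10)   # both print 0.0 (up to rounding)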
189345
https://www.youtube.com/watch?v=2Fbz9GoHwaE
Khan Academy Tutorial: trig values of π/6, π/4, and π/3
West Explains Best
Posted: 10 Jan 2023
#maths #khanacademy #unitcircle #precalculus #trigonometry
0:00 Intro to Unit Circle
7:06 Khan Academy Questions
11:36 Like and Subscribe!
Please follow me on Instagram (@westexplainsbest) or my Facebook page. I hope you enjoyed the video! Please leave a comment if you'd like to see a topic covered or have any mathematics related question. Be sure to search for any other concept you need and check out some of my non-math videos in the special features playlist.

Transcript:

Intro to Unit Circle

Hi everyone, this is Mr. West and today we're doing a Khan Academy tutorial on trig values of pi over 6, pi over 4 and pi over 3. This was a special request from the Chhatta family. If you have a request yourself, leave a comment and let me know what exercise or worksheet you need done and I'll go ahead and make a video for you. So in this Khan Academy exercise, we are talking about radians. So you can see at the top values of pi over 6, pi over 4, and pi over 3. So I'm actually going to flip over here to the unit circle and show you what it is talking about. So normally when you're talking about circles, you think in degrees. So we have 0 degrees, we have 90°, 180, and it goes around and there's all different kinds of measures. I'm particularly looking at 0, 30, 45, 60, and 90. Now the radian equivalents are pi over 6, pi over 4, pi over 3, pi over 2 and so on. Now what is a radian? A radian is a measure of a circle on the outside. So let me go ahead and draw a quick picture. So if I have a circle here, a radian is the measure. So that, okay, if we draw a radius here and we draw a radius here, and then this segment right here, if this is the distance also of the radius, that is the measure we'd call a radian. Okay, so that angle measure would be a radian: the value of radius, radius, and then on the outside of the circle, radius. And it turns out that there's two pi radians in a circle, and that's actually the circumference. Okay, so that's a quick background on the radians. Now when we're looking at these values, we see we've got the unit circle and we actually see the sine and cosine values posted here. So this is the sine value and this is the cosine value. So we've got sine and cosine of these different angle measures. So I'm going to talk about why, for these different radians, the sine and cosine equal 1/2, root 3 over 2, and so on. Okay. So let's go down here. These are the reference triangles. So the main two reference triangles you're going to be using in Khan Academy and in trigonometry are the 30-60-90 and the 45-45-90. Let's go ahead and start with the 30-60-90. So the reason why this distance is one right here is because this is talking about the radius of the circle. That is the... whoops... that is the radius of the unit circle. Okay, so that's why that's a one. Now, when we're talking about trig values, we're talking about sine, cosine, and tangent. And I'm going to first start with 30°. So, I'm going to concentrate on 30° and then we'll move on to 60. So, if we're talking about the sine of 30°, we're talking about the opposite side over the hypotenuse. Well, the opposite side is 1/2 and the hypotenuse is 1. So 1/2 to 1 is going to be our sine of 30°. However, if we just divide by one, technically the value doesn't change. So it stays as 1/2. So the sine of 30° is 1 over 2.
I'm actually going to change to blue just so it matches a little bit better. So we have 1 over two for the sine of 30°. Now, what about the cosine of 30 degrees? Well, a little bit different. Now we're talking about the adjacent side, which is this side right here. That's the adjacent side. And that, in this case, is root 3 over 2. And that's going to be over one also. Whoops, I don't know where I wrote a two. So, we have root 3 over two... sorry, I forgot to write it: root 3 over two, over one. But again, we found out before, if we divide by one, the value just stays the same. So, root 3 over two divided by 1 is just root 3 over two. Now we're finally on to tangent. This is actually the trickiest one, because we have the opposite side over the adjacent, the opposite side being 1/2. I'm actually going to use some space down here. And then this is going to be divided by root 3 over 2. So we have a complex fraction here. And in order to deal with that, we're going to multiply by the reciprocal, like this. And we see that we can cancel out the twos. We're left with 1 over root 3. You almost never see it written that way though. Instead, it's simplified, getting the radical out of the denominator, and you get square root of three over three. So, that's going to be what our tangent is for 30 degrees. Okay, I don't want to emphasize this process right here. I have other videos for that in case you're interested. Let's move on to 60°. So, now that we're done with 30, let me make this a little bit smaller. We're done with 30. Now, we're going to move on to 60°. So, with 60°, it's very similar to these 30° values. So, let's start off with the sine. And for the sine we're going to have the opposite over the hypotenuse. In this case it's root 3 over 2, divided by 1. Okay. So in that case, when we divide by 1, it just remains the same, just like that: root 3 over 2 is the sine of 60. Now when we do the cosine, it's going to be adjacent over the hypotenuse. Again, the hypotenuse is one. When you divide by one, it just stays the same. So we get 1 over 2. You'll notice there's a connection here with sine and cosine. It's because they're sharing those sides. Okay? So the opposite side for 60 is the adjacent side for 30 and vice versa. So that's why they have that little connection. The only thing that differs between 30° and 60° is the tangent. So that's our last one. We are going to have the opposite over the adjacent side for tangent. So my opposite side is root 3 over 2. I need to go down here for this. Just the exact opposite situation that we had right here. So root 3 over 2 divided by 1/2. In order to handle this complex fraction, I need to multiply by the reciprocal instead. And you'll see that this one is a little bit more simple. Square root of 3 over 1, or just square root of 3. You never really see it written over one. So I'm going to get rid of that fraction. And we just have root three. Okay. Moving on to 45°. We're going to do this and then we get right into the Khan Academy exercises. So this is just background information. So again, we have the radius of the unit circle. Okay, that's what this is. That's why it's a one right there. And then we have these different sides. So we have root two over 2 and root two over 2. And that's because with the 45-45-90, it's an isosceles triangle. There's two equal sides. And that's why they're equal here. So we're talking about the sine. Sine is the opposite over the hypotenuse. And honestly, it doesn't matter. This is a 45 degree angle, but both of these are 45 degree angles.
So we can just stick with one and then use the same values for the other 45 degree angle. So if we have root two over two, which is our opposite side, over a hypotenuse... our hypotenuse again is one. So again it doesn't change the value. So we're going to leave it as root 2 over two. Cosine is going to be the same thing, because the adjacent side is the same as the opposite side. And then if we have the tangent, this is actually the easiest of the tangents. We're going to have root two over 2, that's our opposite side, over the adjacent side, which is square root of two over two. And if we divide something by itself... okay? Well, I can show you another way. We just multiply by the reciprocal. And you'll see that all these values cancel out. We're just left with one. So, the tangent of 45° is just one. Okay. So, we're done with all our trig values. Now, we can jump into the Khan Academy. If you need a refresher, just rewind. Uh, I'll probably label these with chapter headings in case you're interested. All right.

Khan Academy Questions

Okay, so we're back into our trig values for pi over 6, pi over 4, and pi over 3 exercise in Khan Academy. We're talking about the cosine of pi over 4. One of the first things that you need to do with this exercise is understand what angle we are talking about. I just talked about 45° and 60° and 30°, and now we're switching back to radians. So again, pi over 4, we're talking about 45 degrees. Pi over 6, we're talking about 30°. And then pi over 3, we're talking about 60°. So that's something to keep in mind as we do this. So we recognize, if we're talking about pi over 4, we are talking about 45°. So the cosine of 45°, as we saw, is the same as the sine of 45 degrees. We're going to put in a fraction here. Let me get this going. So, we're going to put in a fraction. I'm not sure why it's not popping up. Okay, let's hide the scratch pad. There we go. Hopefully that works. Huh, that is so odd. It's not letting me type. And we get square root of two over two. And that should be good. We're on to our next question. So this one, tangent of pi over 6. Again, we need to figure out that pi over 6 is equal to 30°. And if we're talking about tangent, we could just reference this little chart that we have here. We already figured out our tangent. It's root 3 over 3. And that's what we're going to type in our answer for pi over 6. So I'm going to go ahead and type in square root of 3 over 3. And there we go. Now, this whole triangle up top, you really don't need to know anything about it. The four, the pi over 6... well, he already tells you about the pi over 6, but the J, K and L, that's kind of irrelevant to this exercise. We just need to know about the angle measure in radians. So, point A is on the following unit circle. What are the coordinates of point A? So I kind of mentioned it in the very beginning, but any time we have a coordinate here, okay, this is composed of our adjacent side and our opposite side. The opposite side, as you can tell, is going up. So that's going to be our y-coordinate. And then our adjacent side, okay, that's going to the right, as you can see, okay, following that path, and that's going to be our x-coordinate. So in other words, remember how we did the sine, and that was over one, and the cosine, we divided by one, because of the unit circle. Okay? Well, that means that our x value is going to be the cosine of the angle and our y-coordinate is going to be the sine of the angle.
And I showed you that from the very beginning, right here, with these different values. I said the cosine was going to be that x value and the sine was going to be that y value. So what we're going to do here is we're going to look at pi over 3. And we know that pi over 3 is equal to 60°. And we're just going to find the cosine and then the sine. So we can just go back to our chart here. If you ever need to, you can always just redraw the triangle and figure it out. But here we have our cosine. Our cosine is going to be 1/2. So that's going to be our x-coordinate for this one. So one over two. If I can grab it. There we go. One over two. And then our sine, I think it's root 3 over 2. Yes, it is. So we get square root of 3. I'm going to put my fraction first. Square root of 3 over two. We're going to check it. And there we go. One more to go. All right. Now we're on to the sine of pi over 6. This one's my favorite actually, because it is simply 1/2. Okay. Again, pi over 6 is a 30° reference angle. And then we have sine: opposite over hypotenuse, which means just over one. And so we have 1/2, and that's it.

Like and Subscribe!

So I hope this was helpful. Please leave a comment if you need any of these other problems covered. You can even describe the problem in the comment and I should be able to answer it. If you have any other requests, make a request known in the comment section. I look forward to seeing you next time right here on West Explains Best.
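As a quick check of the values worked out in the video (our own sketch, using Python's standard math module; not part of the original transcript):

import math

# sine and cosine at pi/6, pi/4, pi/3 -- the 30, 45, and 60 degree angles
for denom, name in ((6, "pi/6"), (4, "pi/4"), (3, "pi/3")):
    angle = math.pi / denom
    print(name, round(math.sin(angle), 4), round(math.cos(angle), 4))

# pi/6: sin = 0.5,    cos = 0.866  (1/2 and root 3 over 2)
# pi/4: sin = 0.7071, cos = 0.7071 (root 2 over 2 for both)
# pi/3: sin = 0.866,  cos = 0.5    (root 3 over 2 and 1/2)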
189346
https://core.ac.uk/download/pdf/288292294.pdf
Metal occlusal surface dentures: A Case Report

Devendra Chopra,1 Naorem Satish Kumar Singh,2 Deepak Sharma,3 Parag Nehete,4 Shailendra Kumar Singh5

IJCDS • MARCH, 2012 • 3(1) © 2012 Int. Journal of Clinical Dental Science • CASE REPORT

Introduction

Occlusal surfaces of posterior teeth of complete dentures that are harmonious with mandibular movement contribute to masticatory efficiency and stability of dentures. However, the rate of wear depends hugely on the dietary habits of the patient. It has been observed that the rate of wear is quite high in patients who predominantly eat a non-vegetarian diet or have habits such as chewing tobacco. Historically, cast gold occlusal surfaces for complete or removable partial dentures have been fabricated with retentive loops or beads and then luted to prepared denture teeth.1 Methods have been developed where acrylic resin is used first for the occlusal surfaces of posterior teeth, occlusal adjustments are carried out in the patient’s mouth, and the acrylic resin is then replaced by metal. However, conventional procedures make it difficult to convert occlusal surfaces of posterior artificial teeth to metal occlusal surfaces. One method includes removing the occlusal surfaces of arranged resin teeth and making a wax pattern and a casting. Another method uses metal occlusal surfaces fabricated by making a core of the adjusted resin occlusal surfaces. These methods, however, are relatively complicated, and it is often difficult to accurately reproduce the occlusion established in acrylic resin teeth.2–7

Clinical Report

A 68 year old male patient reported with a chief complaint of worn out dentures. On taking a detailed history, it was revealed that the patient had come for a third pair of complete dentures in as many years. Furthermore, it was observed that the diet of the patient was purely non-vegetarian, and a habit of tobacco chewing was reported. Pigmentation of the maxillary and mandibular ridges was also noted (Fig 1). Examination of the dentures showed that only the occlusal surfaces had a severe wear pattern, as opposed to the denture base, which was normal (Fig 2). Clinical evaluation revealed dentures with good retention and stability, but there was an associated loss of vertical dimension. It was diagnosed as a case of excessively worn out dentures because of dietary habits. The treatment plan suggested to the patient was that of dentures with metallic occlusal surfaces, considering his economic status and non-compliance regarding the dietary habits.

ABOUT THE AUTHORS

1. Dr. Devendra Chopra, Senior Lecturer (Dept. of Prosthodontics), Institute of Dental Sciences, Bareilly
2. Dr. Naorem Satish Kumar Singh, Asst. Prof. (Dept. of Dentistry), Jawaharlal Nehru Institute of Medical Sciences, Imphal
3. Dr. Deepak Sharma, Asst. Prof. (Dept. of Conservative Dentistry and Endodontics), GGS Dental College, Burhanpur
4. Dr. Parag Nehete, Senior Lecturer (Dept. of Conservative Dentistry and Endodontics), S.M.B.T Dental College, Sangamner
5. Dr. Shailendra Kumar Singh, Asst. Prof. (Dept. of Orthodontics), Azamghar Dental College, Azamghar

Corresponding Author: Dr. Devendra Chopra, Faculty Residence No. 10, Institute of Dental Sciences, Bareilly (Uttar Pradesh), Pin 243001, India. Tel: +91-9839132770, +91-8881544347. Email: cyclone.chopra@gmail.com

Abstract

The wearing out of the occlusal surfaces of the acrylic teeth of a complete denture with its use over a period of time is a common phenomenon.
Other factors which lead to the wearing of denture teeth are some unconditional forces and the eating habits of the edentulous patient. Following a successful insertion of the complete denture with acrylic teeth, the dentures were taken back from the patient and the occlusal surfaces were modified with cast metal using the lost wax technique. The final restorations were cemented on the prepared plastic teeth. This article describes a simplified method for making cast metal occlusal surfaces and for improving their retention on the underlying plastic structure.

KEYWORDS: Metal occlusal surface, customized occlusal surface dentures.

Fig 1. Maxillary & mandibular ridge
Fig 2. Previous sets of dentures
Fig 3. New dentures articulated
Fig 4. Duplicated dentures
Fig 5. Prepared occlusal surfaces of maxillary denture
Fig 6. Wax patterns

Procedures

The complete denture was fabricated as per the routine procedure and was inserted. The patient was advised to use the denture for a couple of days, as the metal occlusal surfaces are made after all adjustments are completed and the occlusion is harmonious. After the patient got adjusted to the new set of dentures, the laboratory procedure was initiated and a set of duplicated dentures was given to the patient (Fig 3 & 4).

1. A new cast was poured with the recently made dentures.

2. The maxillary and mandibular complete dentures (along with the centric records) were mounted on a semi-adjustable articulator.

3. First, the occlusal surfaces of all the posterior teeth of the maxillary denture were prepared, wax patterns fabricated, and castings done (Fig 5 & 6).

4. After finishing and polishing, the castings were cemented with resin-modified glass ionomer cement. After this, the occlusal adjustments were made against the mandibular denture (Fig 7 & 8).

5. The posterior teeth of the mandibular denture were also prepared in the same manner, and the castings were cemented to the denture with the help of resin cement (Fig 9 & 10).

6. Occlusal adjustment was done, followed by finishing and polishing of the dentures, and the dentures were inserted (Fig 11 & 12).

Fig 7. Cementation of castings
Fig 8. Occlusal adjustment
Fig 9. Wax patterns
Fig 10. Prepared occlusal surface of mandibular denture with cemented castings
Fig 11. Dentures inserted
Fig 12. Post-operative view

Discussion

The technique described in this article is unique in many ways. Unlike other techniques, no stone indexes were used to fabricate the wax patterns: one by one, the occlusal surfaces of the acrylic teeth were prepared and the wax patterns were adjusted to their opposing occlusal surfaces. As a result, fewer occlusal adjustments were required in the final restorations. Additionally, individual units, not multiple units, were fabricated. Improved access to casting margins was provided so that the final finishing and polishing could be completed to near perfection.
Moreover, while preparing the acrylic occlusal surface, a small post space is also prepared to enhance the retention of the cemented casting. An esthetic concern for all techniques that advocate the use of metal occlusal surfaces is the display of metal while smiling and speaking.8 Frosting the metal with a light spray from a sandblaster can reduce the amount of light reflected, thereby making the metal less noticeable. Fabrication of metal occlusal surfaces for denture teeth is not a widespread practice, considering that it has rare indications, as depicted in this case. The need for this kind of modification is quite subjective and has its own set of implications. The forces of occlusion are not only transferred to the bone but also cause wearing of the denture teeth, thus maintaining an equilibrium.9 With metal occlusal surfaces, however, the bone would bear the brunt of the forces and eventually resorb at a faster rate. Furthermore, correction of the occlusion in such cases may be very tedious if precautions are not taken at the time of cementation of the castings.

Summary & Conclusion

This combination is effective with various ridge relationships, and there is no specific occlusal scheme that needs to be followed. Establishing metal occlusal surfaces for dentures remains very subjective and needs judicious treatment planning.

References

1. Lloyd PM. Laboratory fabrication of gold occlusal surfaces for removable & implant supported prostheses. J Prosthodont 2003 Mar;12(1):8-12.
2. Wallace DH. The use of gold occlusal surfaces in complete & partial dentures. J Prosthet Dent 1964;14:326-33.
3. Koehne CL, Morrow RM. Construction of denture teeth with gold occlusal surfaces. J Prosthet Dent 1970;23:449-55.
4. Woodward JD, Gattozzi JG. Simplified gold occlusal technique for removable restoration. J Prosthet Dent 1972;27:447-50.
5. Elkins WE. Gold occlusal surfaces & organic occlusion in denture construction. J Prosthet Dent 1973;30:94-8.
6. Rhoads JE, Rudd KD, Morrow RM. Dental laboratory procedures: complete dentures. Vol 1, 2nd ed. St. Louis: CV Mosby; 1986.
7. Hansen CA, Clark, Wright P. Simplified procedure for making gold occlusal surfaces on denture teeth. J Prosthet Dent 1994;71:413-6.
8. Schneider RL. Custom metal occlusal surfaces for acrylic resin denture teeth. J Prosthet Dent 1981;46:98-101.
9. Winkler S. Essentials of complete denture prosthodontics. 2nd ed.
189347
https://www.quora.com/Linear-Algebra-How-does-one-solve-for-a-matrix-where-the-sum-of-the-rows-is-known-and-the-sum-of-the-columns-is-known
Linear Algebra: How does one solve for a matrix where the sum of the rows is known and the sum of the columns is known?

Topics: Linear Algebra, Rows & Columns, Matrix Construction, Matrices in Maths, Matrix Theory, Matrix Computations, Row Sum, Matrix (problem)

Bhargava Chintalapati, arm chair explorer · Updated 12y

This system has n² variables and only 2n equations, so it will have infinitely many solutions. In linear-algebraic terms, taking the example of a 3-dimensional matrix, the summation action is done by the all-ones row vector [1 1 1]. That is,

[1 1 1] · (1, 2, 3)ᵀ = [1 + 2 + 3]

Similarly,

[1 1 1] · [[1, 2, 3], [3, 4, 5], [6, 7, 8]] = [1+3+6, 2+4+7, 3+5+8]

So effectively all that you need to do is, given Y1, find (X1, X2, X3)ᵀ such that

[1 1 1] · (X1, X2, X3)ᵀ = [Y1]

Since [1 1 1] is of rank 1, it has a 2D null space and a 1D row space. The null space of [1 1 1] is X1 + X2 + X3 = 0. This is a plane (2D), hence it has 2 basis vectors. The following is a vector representation of the plane:

(X1, X2, X3)ᵀ = X2 · (−1, 1, 0)ᵀ + X3 · (−1, 0, 1)ᵀ

Any vector formed by using any X2 and X3 will be in the null space of the summation matrix, which means its entries always add to 0. So, needless to say, adding the above vector to any particular solution will result in another solution. For example, if

[1 1 1] · (X1, X2, X3)ᵀ = [6],

then one particular solution is (1, 2, 3)ᵀ, and the general solution is

(X1, X2, X3)ᵀ = (1, 2, 3)ᵀ + a · (−1, 1, 0)ᵀ + b · (−1, 0, 1)ᵀ

for all a and b. Hence infinitely many solutions.

Sridhar Ramesh, Mathematician/Logician/All-Around Great Guy · Author has 954 answers and 6.6M answer views · 11y · Originally Answered: How does one solve for a matrix where the sum of the rows is known and the sum of the columns is known?

It must be the case that the sum of the row-sums equals the sum of the column-sums (since both are the sum of all the matrix entries). In that case, to get one matrix with appropriate row-sums and column-sums, make a matrix which is all 0s except for the top row and left column. Along the top row (apart from the top-left), place the appropriate column-sums. Along the left column (apart from the top-left), place the appropriate row-sums. Finally, in the top-left, place the sum of the top row + the sum of the left column − the sum of all the matrix entries (calculable as noted above). This produces one solution, but there will be other solutions as well, so long as there are at least 2 rows and 2 columns (consider the matrix ).
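As an illustrative sketch of this construction (our own code, not from the answer; plain Python, no external libraries):

def one_solution(row_sums, col_sums):
    # Build one matrix with the given row and column sums:
    # zeros everywhere except the top row and left column.
    total = sum(row_sums)
    assert total == sum(col_sums), "row sums and column sums must agree"
    m, n = len(row_sums), len(col_sums)
    A = [[0] * n for _ in range(m)]
    for j in range(1, n):
        A[0][j] = col_sums[j]        # top row carries the column sums
    for i in range(1, m):
        A[i][0] = row_sums[i]        # left column carries the row sums
    A[0][0] = row_sums[0] + col_sums[0] - total
    return A

print(one_solution([4, 6], [3, 7]))  # [[-3, 7], [6, 0]]: row sums 4 and 6, column sums 3 and 7

Note that entries may come out negative; if a nonnegative matrix is required, the greedy construction in a later answer below is the usual alternative.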
The full space of solutions will be this solution plus any matrix whose rows and columns all sum to zero. (The general form of a matrix whose rows and columns all sum to zero is as follows: pick any values you like outside the top row and left column, then put their sum in the top-left, then fill out the rest of the top row and left column as necessary to make the columns and rows sum to zero. Along with the above, this shows that, in general, when trying to get particular row-sums and column-sums, if it can be done at all [i.e., if the sum of the row-sums = the sum of the column-sums], then there is a unique way to do it extending any particular partially filled in matrix lacking only a single column and a single row.)

Sean Owen, Data Science Lead at Databricks (2018–present) · Author has 708 answers and 4.7M answer views · 12y · Originally Answered: How does one solve for a matrix where the sum of the rows is known and the sum of the columns is known?

These don't nearly uniquely determine a matrix. For example the following distinct matrices have the same row and column sums:

[1 3]      [3 1]
[2 4]      [0 6]

Kiran Kumar, I write Code · Author has 116 answers and 5M answer views · Updated 7y · Related: How can I fill a matrix whose column sum, row sum, and diagonals sum are the same?

How about a Python code which can auto-fill the required matrix for you! Is this what you want? (made with Pencil tool)

4 3 8
9 5 1
2 7 6

The sum of each individual row is 4+3+8=15; 9+5+1=15; 2+7+6=15. The sum of each individual column is 4+9+2=15; 3+5+7=15; 8+1+6=15. The sum of each individual diagonal is 4+5+6=15; 8+5+2=15. These types of matrices are popularly known as magic squares. I'll tell you a simple way to fill the matrix. Assume every box has compass directions. Every time, you have to place the new number to the south-east of the previous number.
Always place the first number at the middle row, last column.

Now 2 has to be placed to the south-east of 1. But that box is out of the matrix, so you have to place the number in the box opposite to it.

Now 3 has to be placed south-east of 2, but it will again be out of the matrix, so place the number in the box opposite to it.

Now 4 has to be put to the south-east of 3, but that box was already occupied by 1, so place 4 to the left side of 3.

Now 5 and 6 can be placed to the south-east, because you won't go out of the matrix or hit an already filled box.

Now 7 has to be placed south-east, but it will be out of the matrix, and the box opposite to it will also be out of the matrix. In this case you have to fill the box which is to the left of the previous number. Fill 8 & 9 like this.

Coding will be very easy once you get the concept:

n = 3  # any odd n works
table = [[0] * n for _ in range(n)]
i, j = n // 2, n - 1            # start at the middle row, last column
table[i][j] = 1
for k in range(2, n * n + 1):
    ni, nj = (i + 1) % n, (j + 1) % n   # south-east, wrapping to the opposite side
    if table[ni][nj] == 0:
        i, j = ni, nj                    # the south-east box is free
    else:
        j = (j - 1) % n                  # occupied: step left instead
    table[i][j] = k
print(table)

I originally used try-excepts, because every time I tried to access a box which is out of the matrix it would lead to an index-out-of-bounds error; wrapping the indexes around with the modulus operator does the same job. I hope I've helped you. Here you go: Kiran-G1/myscripts has the code.
Assistant Bot · 1y

To solve for a matrix where the sum of the rows and the sum of the columns are known, you can use a method often referred to as the matrix completion problem or transportation problem in optimization. Here's a step-by-step approach to construct such a matrix:

Problem Setup

Let:
- A be an m×n matrix.
- r_i be the known sum of the i-th row (for i = 1, 2, …, m).
- c_j be the known sum of the j-th column (for j = 1, 2, …, n).

Conditions

Feasibility check: the total sum of the row sums must equal the total sum of the column sums:

[math]\sum_{i=1}^m r_i = \sum_{j=1}^n c_j[/math]

If this condition is not met, it is impossible to construct such a matrix.

Steps to Construct the Matrix

Initialize the matrix: start with an m×n matrix A filled with zeros.

Greedy algorithm: iterate through the matrix and fill in values while respecting the row and column sums. For each element A[i][j]:

Set [math]A[i][j] = \min(r_i, c_j)[/math].

Update the row and column sums:

[math]r_i = r_i - A[i][j][/math]
[math]c_j = c_j - A[i][j][/math]

Continue this process until all row sums and column sums are satisfied.

Adjustments: if you reach a point where you cannot fill a cell without violating the row or column constraints, you may need to backtrack and adjust previous entries, or use linear programming methods to ensure all constraints are satisfied.

Example

Suppose you want to find a [math]2 \times 3[/math] matrix [math]A[/math] such that:

Row sums: [math]r_1 = 3, r_2 = 4[/math]
Column sums: [math]c_1 = 2, c_2 = 3, c_3 = 2[/math]

Initialize:

[math]A = \begin{bmatrix} 0 & 0 & 0 \\ 0 & 0 & 0 \end{bmatrix}[/math]

Fill the matrix:

Start with [math]A_{11}[/math]: Set [math]A_{11} = \min(3, 2) = 2[/math]. Update sums: [math]r_1 = 1, c_1 = 0[/math].

Next, [math]A_{12}[/math]: Set [math]A_{12} = \min(1, 3) = 1[/math].
Update sums: [math]r_1 = 0, c_2 = 2[/math].

Move to [math]A_{22}[/math]: Set [math]A_{22} = \min(4, 2) = 2[/math]. Update sums: [math]r_2 = 2, c_2 = 0[/math].

Fill [math]A_{23}[/math]: Set [math]A_{23} = 2[/math] (to satisfy [math]r_2[/math]). Update sums: [math]r_2 = 0, c_3 = 0[/math].

Final matrix:

[math]A = \begin{bmatrix} 2 & 1 & 0 \\ 0 & 2 & 2 \end{bmatrix}[/math]

This matrix satisfies both the row and column sum conditions.

Conclusion

This method provides a systematic way to construct a matrix given specific row and column sums. In more complex cases or larger matrices, optimization techniques such as linear programming might be necessary to find a feasible solution.

Sean Owen, Data Science Lead at Databricks (2018–present) · Upvoted by Anurag Bishnoi, Ph.D. Mathematics, Ghent University (2016) · Author has 708 answers and 4.7M answer views · 12y · Related: Linear Algebra: What makes the column space and row space of a matrix have the same dimension?

First, a light-weight proof, in case that's intuitive enough: Let's say matrix A is m x n. A has n columns, each of which is an m-dimensional vector. Let's say the column space of A is c-dimensional; c may be less than m and n. There is a basis of c vectors (each m-dimensional) that spans the column space of A. So the columns of A can be written in terms of these c vectors. To express that, write the matrix B, containing those c vectors as columns. Then we'll have A = BC, where C's columns are the coordinates of columns of A in terms of this basis. This is the key point -- won't explain it here at length but it's important in what's next. (We don't care what C is for purposes here; it exists. Same for B.)

Now turn back but along a different path. We could also view A = BC as a statement about the basis for A's rows. B's rows are coordinates for A's rows expressed in the basis of C's rows. C has c rows. A's row space is spanned by these c vectors. That doesn't quite mean the space is c-dimensional. It could be less; the rows of C may not be linearly independent. But at least we know that dim(row space of A) <= dim(col space of A).

Now apply the same logic to the transpose of A: dim(row space of A') <= dim(col space of A'), so dim(col space of A) <= dim(row space of A). Since the two dimensions are <= each other, they must be equal.

Now the most intuitive observation I know to make: B has c columns and C has c rows, of course. This is the nature of the symmetry that leads in two directions to conclude the same thing about the row and column rank. B is coordinates in terms of C, C is coordinates in terms of B. This shared c-dimensional space gives rise to both the row space and the column space.
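As a quick numerical illustration of this fact (our own sketch, assuming numpy is available; not part of the answer):

import numpy as np

# Row rank equals column rank: rank(A) == rank(A^T) for any matrix.
rng = np.random.default_rng(0)
A = rng.integers(-5, 5, size=(4, 6)).astype(float)
print(np.linalg.matrix_rank(A), np.linalg.matrix_rank(A.T))  # the same number, twice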
Anurag Vamsi · 6y · Related: How can I fill a matrix whose column sum, row sum, and diagonals sum are the same?

By using the magic matrix concept, which gives the same sum. [The creation of the magic square was shown in images.]

Lors Soren · Upvoted by Craig Miller, PhD Mathematics, The University of Connecticut, and Paul LeFevre, Masters Mathematics, California State University, Fullerton (1987) · Author has 106 answers and 974K answer views · Updated 8y · Related: What is linear in linear algebra?

This is a great question because the word “linear” is confusing and also fundamental, not just because linear algebra is a well-understood subject to which mathematicians try to reduce other problems, but because the notion of a morphism that preserves properties (homomorphism) is fundamental to higher mathematical language. In seminars I attend, people will usually start off by writing: here are some objects, here are some properties they will be supposed to obey; here are transformations which preserve those properties and thus give us relationships between the members. [John Baez has noted that people who think this way sometimes forget to make sure that the supposed world has any members at all, before forging ahead and inferring interesting things about these hypothetical cases.]

Linear transformations (shear, rotation, reflection, and centred enlargement/shrinkage) do not distort things very much, but allow enough of a difference to result in equivalent but easier formulations of a problem (change of basis or isomorphism to an easier version of the same thing); for example, Alon Amit (אלון עמית)'s answer to “Why would we want to transform the coordinates of a vector to another basis? Are there any real life examples where this may be necessary?”.

“Linear” in the sense of linear algebra does not quite mean lines [math]y=mx+b[/math]:

- The [math]b[/math] is always [math]b=0[/math].
- The [math]x[/math] can be multi-dimensional, meaning that we answer questions about planes and hyperplanes (thru the origin) as well as traditional lines.
- The [math]m[/math] will be a matrix instead of a scalar (single number). This is a huge, huge difference. How to multiply matrices.

Because matrices keep track of so many interactions, linear algebra has practical value: my answer to “What is the point of linear algebra?”.
Actually linear algebra treats not only multi-dimensional hyperplanes, but also abstract vectors, i.e. “things that can be added”. That denotation subsumes infinite-dimensional things like colour (light, being a wave, has infinitely many places where it could wiggle = be different = vary), cardiograms (again, heartbeats make sounds which are waveforms), …, as well as topological vector spaces (including Lev Vygotsky's idea of the zone of proximal development in children's education as well as an economic model of trade; nb, Derman objects to it).

The abstract conception of linear is big enough to cover very different sorts of concepts:

- repeated addition of a constant, [math]mx+b[/math], e.g. [math]0+4+4+4+4+4+4+4[/math] (there [math]b=0; m=4; x=7[/math])
- bounded linear operators (pics by Dauger Research: Atom in a Box app)
- the derivative operator on [math]C^{\infty}[/math] functions mapping, for example, ℂ→ℂ
- repeated multiplication by a constant, [math]C\cdot e^{k \cdot t}[/math], for example [math]100 \cdot 1.02 \cdot 1.02 \cdot 1.02 \cdot 1.02 \cdot 1.02[/math] (there [math]C=100; k = \ln 1.02; t = 5[/math])
- rotation (repeated multiplication by a matrix with determinant 1 and not the identity matrix); I wrote several blog posts about this: “Why are rotations linear? Here's a physically intuitive reason that...”
- shapes, figures & forms (and you can check out my whole linear tag, as well as the posts “Space” (for a related confusing but fundamental word), “What is a ‘structure-preserving’ map?”, and “I used to be like the engineer in the joke about 4-D Kaluza-Klein theories”)

If you want to go further in the direction of beauty, I recommend googling Representation Theory (mathematics). Where linear algebra gets really interesting in my view is less the Homological Algebra direction than the A-D-E direction. See John Baez's website for a good intro to ADE, then Jose Montesinos' book Tessellations of 3-manifolds (Danny Calegari has a course on it) and Coxeter's Regular Polytopes. If you want to go further practically, read about the Singular Value Decomposition and various other matrix factorizations (including Compressed Sensing). (My suggestions / opinion.)

Examples of other mathematics reducing, drawing on, or relating to linear algebra:

- Knots: I think of knots as being the least linear category (although I haven't thought hard about this; for example I couldn't state a categorical property that really encapsulates “linearity” or “the opposite of linearity”. Hmm.), but I almost certainly have at some point heard someone say “and from there it's just linear algebra”. Conway's skein relations are linear (are morphisms) on knots.
- Homological algebra is technically a subset of linear algebra: it just looks at kernels, dimension, the rank-nullity theorem, stuff like that. But because homological algebra doesn't require the “straight-line” condition (via “A sudoku of linear functionals”), just imposing the “linearity” conditions of the origin and things that map into the origin, homological algebra is looser and applies in more places.
- Linear algebra over finite fields ties in to “What is the difference between algebra, linear algebra, and abstract algebra?”
and eventually that will tie into

Sridhar Ramesh, Mathematician/Logician/All-Around Great Guy · Upvoted by Justin Rising, PhD in statistics, and Jonathan Lewis, Masters Mathematics & Theoretical Physics, University of Cambridge (1988) · Author has 954 answers and 6.6M answer views · 13y · Related: Linear Algebra: What is a determinant of a matrix?

First, let's examine what matrices “really are”: When you multiply a matrix by the coordinates of a point, it gives you the coordinates of a new point. In this way, we can think of a matrix as a transformation which turns points in space into different points in space. And this is what matrix arithmetic is all about: matrices represent transformations (specifically, so-called “linear” transformations).

The determinant of a transformation is just the factor by which it blows up volume (in the sense appropriate to the number of dimensions; “area” in 2d, “length” in 1d, etc.). If the determinant is 3, then it triples volumes; if the determinant is 1/2, it halves volumes, and so on. (The one nuance to add to this is that we are actually speaking about “oriented” volume. That is, our transformation may or may not turn figures inside out (e.g., in 2d, it might turn clockwise into counterclockwise; in 3d, it might turn left-hands into right-hands). If it does turn figures inside out, its determinant is considered negative.)

So, for example, in 3d: any rotation has determinant 1 because it leaves volume unchanged. Scaling everything up by a factor of 2 has determinant [math]2^3[/math] because that's the factor by which it increases volume. Projecting everything onto a plane has determinant 0, because it flattens everything to volume 0. Any reflection has determinant -1, because it turns everything inside out but otherwise leaves volume unchanged. And so on...

Finally, one useful observation: if you take a transformation which multiplies volume by, say, 5, and follow it up with a transformation which multiplies volume by 3, then overall, volume will multiply by 5 · 3 = 15. Multiplying matrices amounts to chaining transformations end-to-end in this manner, so the determinant of a product of matrices is the product of their determinants.

[The reason a linear transformation has to blow up all regions' volumes by the same factor, should you care, is this: You can always think of any region as made up of lots of tiny regions, all identical except for their position. Each of these tiny regions is transformed in the same exact way except for position (this is the “linearity”), so the big region blows up by the same factor as each of the tiny regions. So, you can think of the determinant as the amount by which your favorite tiny shape scales in volume, and rest assured that all other regions scale by the same factor as well.]
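As a small numerical companion to these examples (our own sketch, assuming numpy is available; not part of the answer):

import numpy as np

theta = 0.7
rotation = np.array([[np.cos(theta), -np.sin(theta)],
                     [np.sin(theta),  np.cos(theta)]])
scaling = 2 * np.eye(3)                  # scale everything up by a factor of 2, in 3d
reflection = np.array([[1.0, 0.0],
                       [0.0, -1.0]])     # flip across the x-axis

print(np.linalg.det(rotation))    # 1.0  -- rotations preserve oriented area
print(np.linalg.det(scaling))     # 8.0  -- volume scales by 2^3
print(np.linalg.det(reflection))  # -1.0 -- turns figures inside out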
Richard Goldstone · PhD in Mathematics, The Graduate Center, CUNY
Linear Algebra: What makes the column space and row space of a matrix have the same dimension?
I don't think row operations and pivot counts are "what makes" the row and column spaces isomorphic; those are computational ways of seeing that it is so. In the context of real inner product spaces, I think the heart of the matter is the characterization of [math]A^T[/math] for a matrix [math]A[/math], namely the fact that (confusing [math]A[/math] with a linear transformation [math]V \to W[/math]) [math]\langle Av, w \rangle = \langle v, A^T w \rangle[/math] for all [math]v \in V[/math] and all [math]w \in W[/math]. This quickly implies that the orthogonal complement of [math]\text{Col}(A)[/math] in [math]W[/math] is [math]\text{Null}(A^T)[/math], and so in particular that [math]\text{Null}(A^T) \cap \text{Col}(A) = \{\vec{0}\}[/math].

What makes this trivial intersection decisive is that it means that [math]A^T[/math], restricted to [math]\text{Col}(A)[/math], is 1–1. Moreover, the above-mentioned orthogonal sum means that every element [math]w \in W[/math] can be written in the form [math]Av + w_0[/math] for some [math]v \in V[/math] and some [math]w_0 \in \text{Null}(A^T)[/math]. It follows that [math]A^T w = A^T(Av + w_0) = A^T(Av)[/math]; this says that the restriction of [math]A^T[/math] to [math]\text{Col}(A)[/math] maps [math]\text{Col}(A)[/math] onto [math]\text{Col}(A^T)[/math].

Combining the two observations above: the restriction of [math]A^T[/math] to [math]\text{Col}(A)[/math] gives an isomorphism [math]\text{Col}(A) \cong \text{Col}(A^T) = \text{Row}(A)[/math].

The argument for complex spaces is almost identical, with [math]A^T[/math] replaced by [math]A^H[/math], the conjugate-transpose of [math]A[/math].
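A numeric companion to this argument (our illustration, not part of the answer above): Gaussian elimination reports the same rank for a matrix and its transpose, which is exactly dim Col = dim Row.

```c
#include <stdio.h>
#include <math.h>

/* Rank via Gaussian elimination (C99 variable-length array parameters). */
static int rank_of(int rows, int cols, double m[rows][cols]) {
    int rk = 0;
    for (int col = 0; col < cols && rk < rows; col++) {
        int piv = -1;
        for (int r = rk; r < rows; r++)
            if (fabs(m[r][col]) > 1e-9) { piv = r; break; }
        if (piv < 0) continue;                /* no pivot in this column */
        for (int j = 0; j < cols; j++) {      /* swap rows piv and rk */
            double tmp = m[rk][j]; m[rk][j] = m[piv][j]; m[piv][j] = tmp;
        }
        for (int r = rk + 1; r < rows; r++) { /* eliminate below the pivot */
            double f = m[r][col] / m[rk][col];
            for (int j = 0; j < cols; j++) m[r][j] -= f * m[rk][j];
        }
        rk++;
    }
    return rk;
}

int main(void) {
    double A[2][3]  = {{1, 2, 3}, {2, 4, 6}};   /* second row = 2 * first */
    double At[3][2] = {{1, 2}, {2, 4}, {3, 6}}; /* its transpose */
    printf("rank(A) = %d, rank(A^T) = %d\n",
           rank_of(2, 3, A), rank_of(3, 2, At)); /* both print 1 */
    return 0;
}
```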
Alon Amit · 30 years of Linear Algebra
How do I determine the size (number of rows and columns) of a matrix AB (where A and B are matrices)?

The short answer is simple: an [math]m\times n[/math] matrix times an [math]n\times k[/math] matrix is an [math]m\times k[/math] matrix. The "middle number" disappears. (It is a universal convention to say that a matrix with [math]m[/math] rows and [math]n[/math] columns is [math]m\times n[/math].) If that middle number isn't identical, you can't multiply those matrices. The well-known formula for multiplication reveals this quite clearly: the middle index is summed over.

[math]\displaystyle (AB)_{ij}=\sum_{p=1}^n A_{ip}B_{pj}[/math]

But why is that? Why are things defined this way, and how can you understand them conceptually rather than by memorizing formulas?

The explanation could have been straightforward, but some of our most deeply entrenched notational conventions add an annoying twist to it. Matrix multiplication represents functional composition, specifically composition of linear transformations. "Composition" is a very simple concept: you do one thing and then the other. So if you move from [math]m[/math] to [math]n[/math] and then from [math]n[/math] to [math]k[/math], overall you moved from [math]m[/math] to [math]k[/math], just like the rule of matrix multiplication. So this is all very natural. What's the problem?

Here's the problem: the matrix [math]AB[/math] doesn't represent going from [math]m[/math] to [math]k[/math]; it represents going from [math]k[/math] to [math]m[/math].

Ask anyone to represent a mapping from [math]A[/math] to [math]B[/math] followed by a mapping from [math]B[/math] to [math]C[/math]. They will almost certainly write down:

[math]\displaystyle A \overset{f}{\to} B \overset{g}{\to} C[/math]

Now, what would you call the combined map, from [math]A[/math] straight to [math]C[/math]? It's "doing [math]f[/math]", followed by "doing [math]g[/math]". The most natural notation would be [math]fg[/math], or [math]f \circ g[/math], or something like that. However, the vast majority of mathematicians would call this map [math]gf[/math]. Why? Because we naturally show maps flowing to the right, but we write our functions to the left of the argument.

The function [math]f[/math] takes elements of [math]A[/math] and produces elements of [math]B[/math]. If [math]a \in A[/math], [math]f[/math] takes it to the element [math]f(a)[/math]. Notice where [math]f[/math] is positioned relative to [math]a[/math]: to the left. Now where would [math]g[/math] take that? Well, [math]g[/math] takes any [math]b[/math] to [math]g(b)[/math], so it takes [math]f(a)[/math] to [math]g(f(a))[/math]. You must appreciate the visual discrepancy between [math]A \overset{f}{\to} B \overset{g}{\to} C[/math] and [math]g(f(a))[/math].

Some math writers were so bothered by this that they decided to eschew [math]f(a)[/math] in favor of [math]af[/math]. This makes the composition look nicely consistent: starting with [math]a[/math] and applying [math]f[/math] and then [math]g[/math] gives you [math]afg[/math], so now the combined function really is [math]fg[/math]. Herstein's "Topics in Algebra" is written this way, but many people today can't stand this. I understand them. I also understand Herstein. Even if they don't follow Herstein's odd convention, some mathematicians write [math]g\circ f = gf[/math], while others mix the two, using the notation [math]f \circ g[/math] for the composition of [math]f[/math] followed by [math]g[/math] while still admitting that [math](f \circ g)(x) = g(f(x))[/math]. It sucks either way.
But anyway, what does this have to do with the dimensions of a matrix product? Here's a typical chain of linear transformations between vector spaces: [math]S[/math] maps a [math]k[/math]-dimensional space to an [math]n[/math]-dimensional space, while [math]T[/math] maps an [math]n[/math]-dimensional space to an [math]m[/math]-dimensional space. Naturally, then, doing [math]S[/math] followed by [math]T[/math] takes you from [math]k[/math] to [math]m[/math]. This is the heart of the reason the "middle index falls away". When you think of linear transformations, it's the most obvious thing in the world.

But the linear transformation that maps the [math]k[/math]-dimensional space to the final [math]m[/math]-dimensional space here is, once again, [math]TS[/math], not [math]ST[/math]. This means that when we represent all of this with matrices, a matrix product [math]AB[/math] must represent doing [math]B[/math] first, and then [math]A[/math]. So [math]B[/math] will correspond to [math]S[/math] while [math]A[/math] must correspond to [math]T[/math]. And if we want to capture the "disappearing middle", we must describe [math]A[/math] as "[math]m\times n[/math]" and [math]B[/math] as "[math]n \times k[/math]", which will then make [math]AB[/math] be "[math]m\times k[/math]". In other words, an "[math]m \times n[/math]" matrix must represent a linear transformation from an [math]n[/math]-dimensional space to an [math]m[/math]-dimensional space, not the other way around.

And this is why we let matrices act on vectors from the left: we write [math]Av[/math] and not [math]vA[/math]. So if [math]A[/math] is [math]m \times n[/math], it takes an [math]n[/math]-vector [math]v[/math] and produces an [math]m[/math]-vector [math]Av[/math]. This works fine according to our original convention: [math](m\times n)\cdot(n \times 1)=m\times 1[/math]. And once again, this is why we have to write the vector being acted upon as a column vector, not a row vector. Note well: it's perfectly legitimate to multiply a matrix by a row vector from the left. If we did that, the representation of [math]S[/math] followed by [math]T[/math] would be [math]AB[/math] where [math]A[/math] stands for [math]S[/math] and [math]B[/math] stands for [math]T[/math]. It would have been very nice, but almost nobody uses this convention, for the reasons explained.

So, finally, the conventions to get used to:
• When describing a matrix, we always write or say the number of rows first. A "2 by 3" matrix has 2 rows and 3 columns.
• Matrices represent linear transformations by multiplying the coefficient vector as a column vector to the right of the matrix. An [math]m\times n[/math] matrix represents a transformation from a space with [math]n[/math] dimensions to one with [math]m[/math].
• The matrix product [math]AB[/math] represents the transformation which does [math]B[/math] first, then [math]A[/math].

I know this is confusing. You'll get used to it, but it's better to realize that there is something to get used to. I personally prefer to write "an [math]m[/math]-dimensional…" rather than "a [math]m[/math]-dimensional", because that's how people say it. Not sure if there's an actual standard around that, but thanks to those who suggested an edit!
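To see the bookkeeping concretely, here is a small C sketch (sizes 2×3 and 3×4 chosen arbitrarily for this illustration): the middle dimension appears only as the summation bound, and the result is 2×4.

```c
#include <stdio.h>

#define M 2  /* rows of A */
#define N 3  /* cols of A == rows of B: the "middle number" */
#define K 4  /* cols of B */

/* (AB)_{ij} = sum over p of A_{ip} * B_{pj}: the middle index N is
   summed away, leaving an M x K result. */
void matmul(const double A[M][N], const double B[N][K], double C[M][K]) {
    for (int i = 0; i < M; i++)
        for (int j = 0; j < K; j++) {
            C[i][j] = 0.0;
            for (int p = 0; p < N; p++)
                C[i][j] += A[i][p] * B[p][j];
        }
}

int main(void) {
    double A[M][N] = {{1, 2, 3}, {4, 5, 6}};
    double B[N][K] = {{1, 0, 0, 1}, {0, 1, 0, 1}, {0, 0, 1, 1}};
    double C[M][K];
    matmul(A, B, C);  /* C is 2 x 4: the 3 has disappeared */
    for (int i = 0; i < M; i++) {
        for (int j = 0; j < K; j++) printf("%6.1f", C[i][j]);
        printf("\n");
    }
    return 0;
}
```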
David Rutter · B.S. Discrete Mathematics, Georgia Tech
Why is linear algebra called "linear"?

Linear algebra is the study and analysis of linear maps.

Linear maps are the maps that treat a point as if it were a list of coordinates, and transform it by multiplying every coordinate by a (usually different) constant. You can differentiate them from affine maps (which are also sometimes called "linear" in high school algebra) because they always map the origin to itself. (Zero times anything is still zero!)

"Linear maps" are called this because the only operation involved is multiplication of coordinates by constants. A one-dimensional linear map is one in which a single variable is multiplied by a constant. When such a map is graphed in a Cartesian plane, it yields a line through the origin. This is the metaphorical intuition that gives rise to the name.

However, 1-D linear maps do not represent all lines (for that you need affine maps), nor are all linear maps representable by lines. In particular, linear algebra places no limit on the number of dimensions a linear function might map. A 2-D linear map is represented by a plane through the origin. An n-D linear map is represented as an n-D hyperplane through the origin. None of these things are lines, per se, but they are the Cartesian product of lines, which is sufficient reason for us to metaphorically refer to them as linear.
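A tiny C illustration of the description above (the constants 3 and 0.5 are arbitrary choices for this sketch): every coordinate is multiplied by its own constant, and the origin is mapped to itself.

```c
#include <stdio.h>

/* A linear map on R^2 with diagonal entries (a, b): each coordinate is
   just multiplied by a constant. */
void apply(double a, double b, const double in[2], double out[2]) {
    out[0] = a * in[0];
    out[1] = b * in[1];
}

int main(void) {
    double origin[2] = {0, 0}, p[2] = {2, -1}, img[2];
    apply(3.0, 0.5, origin, img);
    printf("origin -> (%.1f, %.1f)\n", img[0], img[1]);  /* stays (0, 0) */
    apply(3.0, 0.5, p, img);
    printf("(2, -1) -> (%.1f, %.1f)\n", img[0], img[1]); /* (6.0, -0.5) */
    return 0;
}
```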
Ahmed Elbadawy · BSE, Electronics and Communication Engineering
How do I write a C program to find the sum of each row and column of a matrix?

I hope you do more work on problem solving and algorithms, as the answer to this question isn't hard:

```c
#include <stdio.h>
#include <stdlib.h>

#define columns 4
#define rows 5

int main(void) {
    int arr[rows][columns];
    /* fill the array with random numbers from 0 to 98 */
    for (int i = 0; i < rows; i++)
        for (int j = 0; j < columns; j++)
            arr[i][j] = rand() % 99;
    /* display the array */
    for (int i = 0; i < rows; i++) {
        for (int j = 0; j < columns; j++)
            printf("%d ", arr[i][j]);
        puts("");
    }
    /* summation of each row */
    printf("\nSummation of rows:\n");
    for (int i = 0, sum = 0; i < rows; i++) {
        for (int j = 0; j < columns; j++)
            sum += arr[i][j];
        printf("Row %d = %d\n", i + 1, sum);
        sum = 0;
    }
    /* summation of each column */
    printf("\nSummation of columns:\n");
    for (int i = 0, sum = 0; i < columns; i++) {
        for (int j = 0; j < rows; j++)
            sum += arr[j][i];
        printf("Column %d = %d\n", i + 1, sum);
        sum = 0;
    }
    return 0;
}
```

Good luck.
Mixed-model Sequencing with Stochastic Failures: A Case Study for the Automobile Industry

I. Ozan Yilmazlar, Mary E. Kurz, Hamed Rahimian
Department of Industrial Engineering, Clemson University, Clemson, SC 29634

Abstract

In the automotive industry, the sequence of vehicles to be produced is determined ahead of the production day. However, some vehicles, failed vehicles, cannot be produced due to reasons such as material shortage or paint failure. These vehicles are pulled out of the sequence, and the vehicles in the succeeding positions are moved forward, potentially resulting in challenges for logistics or other scheduling concerns. This paper proposes a two-stage stochastic program for the mixed-model sequencing (MMS) problem with stochastic product failures, and provides improvements to the second-stage problem. To tackle the exponential number of scenarios, we employ the sample average approximation approach and two solution methodologies. On one hand, we develop an L-shaped decomposition-based algorithm, where the computational experiments show its superiority over solving the deterministic equivalent formulation with an off-the-shelf solver. Moreover, we provide a tabu search algorithm, in addition to a greedy heuristic, to tackle case-study instances inspired by our car manufacturer partner. Numerical experiments show that the proposed solution methodologies generate high-quality solutions by utilizing a sample of scenarios. In particular, a robust sequence that is generated by considering car failures can decrease the expected work overload by more than 20% for both small- and large-sized instances.

Keywords: Stochastic programming, Mixed-model sequencing, Branch-and-Benders-cut, Heuristics, Tabu search

1. Introduction

Mixed-model assembly lines (MMAL) are capable of producing several configurations, or models, of a product. The number of models increases drastically as the complexity and customizability of the product expand. The number of theoretical configurations of vehicles from a German car manufacturer is up to $10^{24}$ (Pil and Holweg, 2004). Different configurations require distinct tasks at each station, which induces high variation in the processing times, though each station has a fixed maximum time available. In fact, station workload is distributed through line balancing such that each station's average workload conforms to this maximum time. When a station has more work allocated to it for a particular model (work overload), interventions are needed to maintain the flow of products in the assembly line, thereby avoiding line stoppages. Interventions can be considered in advance, through sequencing decisions, or at the time of disruption, through utility workers. When these interventions fail, the line may stop production until the situation is resolved. Thus, it is essential to distribute the high-workload models along the planning horizon to avoid line stoppages. The mixed-model sequencing (MMS) problem sequences products in an MMAL to minimize work overload at the stations.
Data from our car manufacturing partner show that the variation in processing times is high when customization appears on a main part, e.g., the engine type: electric, diesel, gasoline, or hybrid. Car manufacturers have adapted their assembly lines for the mixed-model production of vehicles with diesel and gasoline engines. However, the assembly of electric vehicles (EVs) in the same line has brought new challenges, while not eliminating the production of vehicles with diesel or gasoline engines. Unlike other vehicles, electric and hybrid vehicles have large batteries, which causes a huge difference in tasks, e.g., at the station where the battery is loaded. As the proportion of electric and hybrid vehicles grows in a manufacturer's mix, the impact of supply problems increases. Sometimes a part is delayed from a supplier, so a designed sequence of vehicles will have a missing vehicle. Even if this vehicle has a gasoline or diesel engine, its absence may impact the battery-intensive stations. As a manufacturer's mix of vehicles grows more specialized, with more time-consuming content for a large subset and without alternative tasks for the vehicles without the specialized content, the impact of missing vehicles on a carefully designed sequence grows.

Some vehicles in a production sequence may not be ready for assembly on the production day for various reasons, such as the body not being ready, paint quality issues, or material shortage. Such vehicles, referred to as failed vehicles, need to be pulled out of the sequence. The resulting gap is closed by moving the succeeding vehicles forward. This process and the resulting additional work overload occurrence are illustrated in Figure 1 for a battery loading station. The processing time at this station is longer than the cycle time for EVs and shorter than the cycle time for non-EVs; assume that back-to-back EVs cause work overload. We schedule five vehicles, two electric and three non-electric. One of the non-EVs (third in both scheduled sequences) has a high failure probability. The initial sequences, while different, both lead to no work overload when there are no failures. Assuming the third vehicle fails, we have different consequences for the resultant sequence of vehicles. In the non-robust sequence, removing the failed non-EV results in two EVs in a row, which will cause a work overload. However, the robust sequence, which is composed of the same vehicles in a different order, can withstand the failure of the third vehicle without causing a work overload. We refer to this sequence as the "robust" sequence because no work overload occurs when the vehicle with high failure probability is pulled out of the sequence.

Figure 1: Illustration of (a) a non-robust and (b) a robust sequence under stochastic failures.

In this study, we generate robust sequences that consider the vehicles' potential failures to reduce additional work overloads. We focus on the final assembly line, assuming that vehicles follow the same sequence as they arrive from the paint shop and resequencing is not an option; when a vehicle is removed from the sequence, the following vehicles close the gap. The contributions of this study are as follows:

• We provide a two-stage stochastic program for an MMS problem with stochastic product failures, and we provide improvements to the second-stage problem. To the best of our knowledge, this is the first study that considers stochastic failures of products in MMS.
• We adopt the sample average approximation (SAA) approach to tackle the exponential number of scenarios. The numerical experiments show that we can generate robust solutions with an optimality gap of less than 1% and 5% by utilizing a sample of scenarios, for the small-sized and industry-sized instances, respectively.

• We develop an L-shaped decomposition-based algorithm to solve small-sized instances. The numerical results show that the L-shaped algorithm outperforms an off-the-shelf solver, solving the deterministic equivalent formulation (DEF), in terms of both quality and computational time.

• To solve industry-sized instances, we propose a greedy heuristic and a tabu search (TS) algorithm whose convergence is accelerated with problem-specific tabu rules and whose objective reevaluation is accelerated each time a new solution is visited.

• We conduct a case study with data inspired by our car manufacturer industrial partner. The numerical experiments show that we can reduce the work overload by more than 20% by considering stochastic car failures and solving the corresponding problem with the proposed solution methodologies.

The remainder of this paper is structured as follows. MMS-related literature is reviewed in Section 2. The tackled problem is defined, and the mathematical formulation of the proposed problem is presented, in Section 3. Exact and heuristic solution approaches, in addition to the SAA approach, are presented in Section 4. In Section 5, we execute numerical experiments to analyze the performance of the proposed solution methodologies and present the results. Finally, a summary of our findings and a discussion about future work are given in Section 6.

2. Related Work

Manufacturers use various design configurations of MMALs to maximize their revenue. The optimization process of assembly line sequencing takes these design configurations into account. The first paper that articulates the MMS was presented by Kilbridge (1963). Researchers have tackled the MMS with varied characteristics, which required a systematic categorization of the components and the operating system of MMS problems. Dar-El (1978) categorizes MMALs into four categories based on the main characteristics of assembly lines: product transfer system, product mobility on the conveyor, accessibility among adjacent stations, and the attribute of the launching period. An analytic framework for the categorization of Dar-El (1978) is given by Bard et al. (1992). Later, a survey is presented by Boysen et al. (2009), where they define a tuple notation for sequencing problems based on more detailed characteristics of assembly lines, including work overload management, processing time, concurrent work, line layout, and objective, in addition to the main characteristics.

Several objectives are employed to evaluate the performance of an assembly line sequence. The most common objective in the literature, also adopted in this study, is minimizing the total work overload duration, proposed by Yano and Rachamadugu (1991). Tsai (1995) describes hiring utility workers to execute tasks so that production delays are avoided, which leads to the objective of minimizing the total utility work duration. Fattahi and Salehi (2009) minimize the total idle time in addition to utility work. Boysen et al. (2011) propose minimizing the number of utility workers instead of the total utility work duration in order to improve utility worker management.
A few exact solution methods have been proposed in the literature to solve the deterministic MMS problem. Scholl et al. (1998) propose a decomposition approach that uses patterns of different sequences, called pattern-based vocabulary building. They use a column generation method to solve the linear relaxation of the formulation, and an informed tabu search is adapted to determine the pattern sequence. Bolat (2003) proposes a job selection problem that is solved prior to the sequencing problem. They employ a due-date-oriented cost function as an objective, and the work overload is restricted as a hard constraint. They develop a branch-and-bound (B&B) algorithm that is improved with some dominance criteria, a procedure to compare sequences based on quality, which can select 50 jobs out of 100 in seconds. Kim and Jeong (2007) present a B&B algorithm to solve the MMS problem with sequence-dependent setup times. They calculate a lower bound on the work overload of the current sequence and the minimum possible work overload of the unconsidered configurations. The model can solve instances with up to five stations and 30 configurations. Boysen et al. (2011) integrate a skip policy for utility work into the MMS and formulate the new problem as a mixed-integer linear program (MILP). They propose a B&B algorithm with improved lower bound calculations and dominance rules.

There are several heuristic and meta-heuristic approaches related to the MMS. The most popular algorithm is the genetic algorithm (GA), which has been adapted in several ways to solve MMS problems (Hyun et al., 1998; Kim et al., 1996; Ponnambalam et al., 2003; Leu et al., 1996; Celano et al., 1999; Akgündüz and Tunalı, 2010; Zhang et al., 2020). Akgündüz and Tunalı (2011) review GA-based MMS solution approaches. Other popular evolutionary algorithms that are used to solve MMS include ant colony optimization (Akpınar et al., 2013; Zhu and Zhang, 2011; Kucukkoc and Zhang, 2014), particle swarm optimization (Rahimi-Vahed et al., 2007a; Wei-qi et al., 2011), the scatter search algorithm (Rahimi-Vahed et al., 2007b; Cano-Belmán et al., 2010; Liu et al., 2014), and the imperialist competitive algorithm (Zandieh and Moradi, 2019).

While the majority of the MMS literature focuses on models with deterministic parameters, a few studies consider stochastic parameters for either processing times or demand. The seminal study with stochastic processing times is by Zhao et al. (2007). They provide a Markov-chain-based approach that minimizes the expected work overload duration. The approximation is done by generating sub-intervals of possible positions of workers within the stations; the expected work overload is calculated based on the lower and upper bounds of the intervals. Mosadegh et al. (2017) propose a heuristic approach, inspired by Dijkstra's algorithm, to tackle a single-station MMS with stochastic processing times. They formulate the problem as a shortest path problem. Mosadegh et al. (2020) formulate a multiple-station MMS with stochastic processing times as an MILP. They provide a Q-learning-based simulated annealing (SA) heuristic to solve industry-sized problems and show that the expected work overload is decreased compared to the deterministic problem. Brammer et al. (2022) propose a reinforcement learning approach to solve MMS by negatively rewarding work overload occurrences. They show that the proposed approach provides at least 7% better solutions than SA and GA.
Moreover, stochastic parameters have been considered in integrated mixed-model balancing and sequencing problems as well: in processing times (Agrawal and Tiwari, 2008; Özcan et al., 2011; Dong et al., 2014) and in demand (Sikora, 2021). Although numerous studies have been conducted on sequencing problems, only Hottenrott et al. (2021) consider product failures in sequence planning, and they do so in the car sequencing structure. To the best of our knowledge, there is no research available that establishes robust sequences in the MMS structure that can withstand work overloads caused by product failures.

3. Problem Statement and Mathematical Formulation

In Section 3.1, we define the MMS with stochastic failures and illustrate the problem with an example. Then, in Section 3.2, we provide a two-stage stochastic program for our problem.

3.1. Problem Statement

In an MMAL, a set of workstations is connected by a conveyor belt. Products, launched at a fixed rate, move along the belt at a constant speed of one time unit (TU). The duration between two consecutive launches is called the cycle time $c$, and we define the station length $l_k \geq c$, in TU, as the total TU that the workpiece requires to cross station $k \in K$. Operators work on the assigned tasks and must finish their job within the station length; otherwise, the line is stopped or a so-called utility worker takes over the remaining job. The excess work is called work overload. The sequence of products therefore has a great impact on the efficiency of the assembly line. MMS determines the sequence of a given set of products $V$ by assigning each product $v \in V$ to one of the positions $t \in T$.

Formulating the MMS problem based on vehicle configurations instead of vehicles is usual (Bolat and Yano, 1992; Bard et al., 1992; Scholl et al., 1998); however, automobile manufacturers offer billions of combinations of options (Pil and Holweg, 2004). When this high level of customization is combined with a short lead time promised to the customers, each vehicle produced in a planning horizon becomes unique. In this study, vehicles are sequenced instead of configurations, since the case study focuses on an automobile manufacturing facility with a high level of customization. In order to do so, we define a binary decision variable $x_{vt}$, which takes the value 1 if vehicle $v \in V$ is assigned to position $t \in T$. The processing time of vehicle $v \in V$ at station $k \in K$ is denoted by $p_{kv}$. The starting position and work overload of the vehicle at position $t \in T$ for station $k \in K$ are represented by $z_{kt}$ and $w_{kt}$, respectively. Table 1 lists all the parameters and decision variables used in the proposed model. While second-stage decision variables are scenario-dependent, we drop such dependency for notational simplicity throughout the paper unless it is needed explicitly for clarity.

Table 1: List of parameters and decision variables used in the model

Sets and indices:
  $V, v$: vehicles
  $K, k$: stations
  $T, t$: positions
  $\Omega, \omega$: scenarios
Parameters:
  $p_{kv}$: processing time of vehicle $v \in V$ at station $k \in K$
  $l_k$: length of station $k \in K$
  $c$: cycle time
  $f_v$: failure probability of vehicle $v \in V$
  $e_{v\omega}$: 1 if vehicle $v \in V$ exists in scenario $\omega \in \Omega$, 0 otherwise
First-stage decision variables:
  $x_{vt}$: 1 if vehicle $v \in V$ is assigned to position $t \in T$, 0 otherwise
Second-stage decision variables:
  $w_{kt}$: work overload at station $k \in K$ at position $t \in T$
  $z_{kt}$: starting position of the operator at station $k \in K$ at the beginning of position $t \in T$
  $b_{kt}$: processing time at station $k \in K$ at position $t \in T$
In this paper, we adopt the side-by-side policy as the work overload handling procedure. A utility worker is assumed to work with the regular worker side by side, enabling work to be completed within the station borders. The objective of MMS with the side-by-side policy is to minimize the total duration of work overloads, i.e., the total duration of the remaining tasks that cannot be completed within the station borders. The regular operator stops working on the piece at the station border so that they can start working on the next workpiece at position $l_k - c$ in the same station.

We illustrate an MMAL with the side-by-side policy in Figure 2, which represents a station that processes five vehicles. The left and right vertical bold lines represent the left and right borders of the station. Assume that the cycle time $c$ is 7 and the station length $l_k$ is 10 TU, i.e., it takes 10 TU for the conveyor to flow through the station. This specific station processes two different configurations of vehicles: configurations A and B. Configuration A requires option 1 while configuration B does not, so the processing times of configurations A and B are 9 and 5 TU, respectively. Figure 2 illustrates the first five vehicles in the sequence, which is [A, B, B, A, A]. The diagonal straight lines represent the position of the vehicle in the station. The worker always starts working on the first vehicle at position zero, the left border of the station. The second vehicle is already at position $2 = 9 - c$ when the worker completes working on the first vehicle. Note that the next vehicle enters the station borders one cycle time after the current vehicle enters the station borders. The tasks of the second vehicle are completed when the third vehicle has just entered the station. The worker has 2 TU of idle time when the tasks of the third vehicle are completed. The worker starts working on the fifth vehicle at position 2, and the processing time of the fifth vehicle is 9 TU, which causes a work overload of 1 TU, since $2 + 9 - l_k = 1$. The job of processing these five vehicles could not be completed within the station borders, but with the help of a utility worker, we assume that the job is completed at the station border at the cost of 1 TU of work overload. The worker will continue working on the sixth vehicle at position $3 = l_k - c$, and this process continues.

Figure 2: Illustration of a mixed-model assembly line with five vehicles, cycle time $c = 7$, and station length $l_k = 10$. From bottom to top, the diagonal lines correspond to vehicle configurations A, B, B, A, A.

We note that the vehicles usually go through the body shop and paint shop in the scheduled sequence before the assembly process. Hence, the failed vehicles must be pulled out of the sequence, and their positions cannot be compensated, i.e., resequencing is not an option. It is assumed that each vehicle $v \in V$ (with its unique mix of configurations) has a failure probability $f_v$, and failures are independent of each other. The failures are related to the specifications of the vehicle; e.g., the increased production rate of EVs may induce higher failure rates, or painting a vehicle a specific color may be more problematic. In our numerical experiments in Section 5, we estimate the failure probabilities from historical data by doing feature analysis and using logistic regression.
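The station dynamics just described amount to a simple per-position recursion, $w_t = \max(z_t + b_t - l_k, 0)$ and $z_{t+1} = \max(z_t + b_t - c - w_t, 0)$. The following minimal C sketch (ours, not code from the paper) reproduces the numbers of the Figure 2 example:

```c
#include <stdio.h>

/* Starting positions z_t and work overloads w_t at one station under the
   side-by-side policy. Data from the Figure 2 example: c = 7, l = 10,
   sequence A,B,B,A,A with processing times 9,5,5,9,9. */
int main(void) {
    const double c = 7.0, l = 10.0;
    const double b[5] = {9, 5, 5, 9, 9};
    double z = 0.0, total = 0.0;
    for (int t = 0; t < 5; t++) {
        double w = z + b[t] - l;           /* work beyond the right border */
        if (w < 0) w = 0;
        total += w;
        printf("t=%d  z=%.0f  w=%.0f\n", t + 1, z, w);
        z = z + b[t] - c - w;              /* next starting position */
        if (z < 0) z = 0;                  /* operator idles at the border */
    }
    printf("total work overload = %.0f TU\n", total);  /* 1 TU, at t = 5 */
    return 0;
}
```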
3.2. Mathematical Model Under Uncertainty

In this section, we first provide a two-stage stochastic program for our problem. Next, we discuss improvements to the proposed formulation.

Motivated by the dynamics of an MMAL, we formulate our problem as a two-stage stochastic program. The sequence of vehicles is decided in the first stage (here and now), before the car failures are realized. Once the car failures are realized, the work overload is minimized by determining the second-stage decisions (wait and see), given the sequence. First-stage decisions are determined by assigning each vehicle to a position such that the expected work overload in the second stage is minimized.

To formulate the problem, suppose that the various realizations of the car failures are represented by a collection of finite scenarios $\Omega$. As each vehicle either exists or fails in a scenario, we have a total of $2^{|V|}$ scenarios. We let $\Omega = \{\omega_1, \ldots, \omega_{2^{|V|}}\}$, with $\omega$ indicating a generic scenario. To denote a scenario $\omega$, let $e_{v\omega} = 1$ if vehicle $v$ exists and $e_{v\omega} = 0$ if vehicle $v$ fails in scenario $\omega \in \Omega$. We can then calculate the probability of scenario $\omega$ as
$$\rho_\omega = \prod_{v=1}^{|V|} f_v^{\,1-e_{v\omega}} (1 - f_v)^{\,e_{v\omega}},$$
so that $\sum_{\omega \in \Omega} \rho_\omega = 1$, where $f_v$ denotes the failure probability of vehicle $v \in V$.

A two-stage stochastic program for the full-information problem, where all possible realizations are considered, is as follows:

$$\min_x \; \sum_{\omega \in \Omega} \rho_\omega Q(x, \omega) \qquad (1a)$$
$$\text{s.t.} \quad \sum_{v \in V} x_{vt} = 1, \quad t \in T \qquad (1b)$$
$$\sum_{t \in T} x_{vt} = 1, \quad v \in V \qquad (1c)$$
$$x_{vt} \in \{0, 1\}, \quad t \in T,\; v \in V \qquad (1d)$$

where

$$Q(x, \omega) = \min_{z,w,b} \; \sum_{k \in K} \sum_{t \in T_\omega} w_{kt} \qquad (2a)$$
$$\text{s.t.} \quad b_{kt} = \sum_{v \in V} p_{kv} x_{vt}, \quad k \in K,\; t \in T_\omega \qquad (2b)$$
$$z_{kt} - z_{k(t+1)} - w_{kt} \leq c - b_{kt}, \quad k \in K,\; t \in T_\omega \qquad (2c)$$
$$z_{kt} - w_{kt} \leq l_k - b_{kt}, \quad k \in K,\; t \in T_\omega \qquad (2d)$$
$$z_{k0} = 0, \quad k \in K \qquad (2e)$$
$$z_{k(|T_\omega|+1)} = 0, \quad k \in K \qquad (2f)$$
$$z_{kt},\, w_{kt} \geq 0, \quad k \in K,\; t \in T_\omega \qquad (2g)$$

In the first-stage problem (1), the objective function represents the expected work overload, i.e., the cost associated with the second-stage problem. Constraint sets (1b) and (1c) ensure that exactly one vehicle is assigned to each position and that each vehicle occupies exactly one position, respectively. Constraint set (1d) presents the domain of the binary first-stage variables. The second-stage problem (2) minimizes the total work overload throughout the planning horizon, given the sequence and scenario $\omega \in \Omega$. Note that $T_\omega$ denotes the set of positions of non-failed vehicles in scenario $\omega \in \Omega$, which is obtained by removing failed vehicles. Constraint set (2b) determines the processing time $b_{kt}$ at station $k$ at position $t$. The starting position and work overload of the vehicles at each station are determined by constraint sets (2c) and (2d), respectively. Constraint set (2e) ensures that the first position starts at the left border of the station. Constraint set (2f) builds regenerative production planning; in other words, the first position of the next planning horizon can start at the left border of the station. Constraint set (2g) defines the second-stage variables as continuous and nonnegative.

Several remarks are in order regarding the proposed two-stage stochastic program (1)–(2). First, the number of decision variables and the set of constraints in (2) are scenario-dependent, as the valid positions $T_\omega$ are obtained based on the failure scenario $\omega \in \Omega$. Second, the proposed two-stage stochastic program (1)–(2) has a simple recourse. That is, once the sequence is determined and the failures are realized, the work overloads are calculated from the sequence of the existing vehicles, without resequencing.
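As a quick numeric illustration of the scenario probabilities $\rho_\omega$ defined above (ours; the failure probabilities here are invented for the example), consider three vehicles of which the third fails:

```c
#include <stdio.h>

/* Probability of a failure scenario: rho = product over vehicles of
   f_v when the vehicle fails (e = 0) and (1 - f_v) when it exists (e = 1).
   Failure probabilities are made-up example values. */
int main(void) {
    const double f[3] = {0.10, 0.05, 0.20};  /* failure probabilities */
    const int e[3] = {1, 1, 0};              /* vehicle 3 fails */
    double rho = 1.0;
    for (int v = 0; v < 3; v++)
        rho *= e[v] ? (1.0 - f[v]) : f[v];
    printf("rho = %.4f\n", rho);  /* 0.9 * 0.95 * 0.2 = 0.1710 */
    return 0;
}
```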
In the remainder of this section, we first provide two modified models for the second-stage problem so that the number of decision variables and the set of constraints are no longer scenario-dependent. Then, we provide two monolithic MILP formulations for the deterministic equivalent formulation (DEF) of the two-stage stochastic program of MMS with stochastic failures.

For each scenario, we modify the second-stage problem by updating the processing times of failed vehicles instead of removing the failed vehicles. In Figure 3, we demonstrate how the original model (2) and the modified models represent car failures. To this end, we consider the example given in Figure 2 and assume that the vehicle in the second position fails. In the original model, the failed vehicles are removed from the sequence and the succeeding vehicles are moved forward (Figure 3a). The proposed modified models, referred to as the standard model and the improved model, are explained below. In Section 5.2, we discuss the impact of these modified models on the computational time and solution quality.

Figure 3: Assembly line illustration of the proposed models: (a) original model, (b) modified standard model, (c) modified improved model.

Standard model: In a preprocessing step, the processing time of vehicle $v$ is set to zero for all stations if the vehicle fails in scenario $\omega$. Since this modification introduces uncertainty into the processing times, the scenario index $\omega$ is added to the processing time. That is, $e_{v\omega} = 0 \Rightarrow p_{kv\omega} = 0,\; k \in K$. Based on this modification, the second-stage problem for a given scenario $\omega$ can be presented as

$$Q(x, \omega) = \min_{w,z,b} \; \sum_{k \in K} \sum_{t \in T} w_{kt} \qquad (3a)$$
$$\text{s.t.} \quad b_{kt} = \sum_{v \in V} p_{kv\omega} x_{vt}, \quad k \in K,\; t \in T \qquad (3b)$$
$$z_{kt} + b_{kt} - w_{kt} - z_{k(t+1)} \leq c, \quad k \in K,\; t \in T \qquad (3c)$$
$$z_{kt} + b_{kt} - w_{kt} \leq l_k, \quad k \in K,\; t \in T \qquad (3d)$$
$$z_{k0} = 0, \quad k \in K \qquad (3e)$$
$$z_{k(|T|+1)} = 0, \quad k \in K \qquad (3f)$$
$$z_{kt} - \beta_k b_{kt} - z_{k(t+1)} \leq 0, \quad k \in K,\; t \in \{1, \ldots, |T|-1\} \qquad (3g)$$
$$z_{k|T|} - w_{k|T|} - \beta_k b_{k|T|} \leq 0, \quad k \in K \qquad (3h)$$
$$z_{kt},\, w_{kt},\, b_{kt} \geq 0, \quad k \in K,\; t \in T \qquad (3i)$$

The objective function (3a) and the constraints (3b)–(3f) are the same as in formulation (2), except that the set of positions and the length of the sequence are no longer scenario-dependent. Constraint set (3g) guarantees that the starting position at station $k$ at position $t + 1$ equals the starting position of position $t$ when the vehicle assigned to position $t$ fails. Constraint set (3h) ensures that the regenerative production planning is kept in case the vehicle at the end of the sequence fails. The parameter $\beta_k$ is calculated so that $\beta_k b_{kt} > z_{kt}$, which makes constraint sets (3g) and (3h) non-effective for the positions that have existing vehicles. Hence, $\beta_k$ equals the maximum possible starting position divided by the minimum processing time for station $k$:
$$\beta_k = \frac{l_k - c}{\min_{v \in V}\{p_{kv}\}}.$$
Note that the processing times in this calculation are the actual processing times before the preprocessing step. Also, $\beta_k$ is well defined, as the minimum processing time is strictly greater than zero. Figure 3b demonstrates that in the standard model the processing time of the second vehicle is set to zero, so the operator starts working on the third vehicle at position two, where the operator would have started working on the second vehicle if it had not failed.

Improved model: In order to reduce the size of the standard model, we modify it as follows. During the preprocessing step, the processing time of vehicle $v$ is set to the cycle time for all stations if the vehicle fails in scenario $\omega$. We refer to vehicles whose processing time equals the cycle time at all stations as "neutral", because these vehicles do not have any impact on the schedule in terms of work overload (see Proposition 1 and its proof).
In other words, we transform failed vehicles into neutral vehicles, i.e., $e_{v\omega} = 0 \Rightarrow p_{kv\omega} = c,\; k \in K$.

Proposition 1. A neutral vehicle has the same starting position as its succeeding vehicle at all stations. That is, $b_{kt} = c \Rightarrow z_{k(t+1)} = z_{kt}$.

Proof. The operator's starting position for the vehicle at position $t + 1$ is $z_{k(t+1)} = z_{kt} + b_{kt} - c - w_{kt}$. Assume that the vehicle at position $t$ is a neutral vehicle. Then $z_{k(t+1)} = z_{kt} - w_{kt}$. Hence, showing that neutral vehicles never cause a work overload, $w_{kt} = 0$, completes the proof. We know that the maximum starting position at a station is $\max_{t \in T}\{z_{kt}\} = l_k - c$, which is a result of two extreme cases: an operator finishes working on a workpiece at the right border of a station, or the operator cannot finish the work and we have a work overload. The starting position is less than $l_k - c$ in all other cases. Therefore, a vehicle with a processing time less than or equal to $c$ at a station cannot cause any work overload. This completes the proof.

As a result of Proposition 1, constraints (3g) and (3h) can be removed from the standard model; hence, the problem size is reduced. Figure 3c contains an illustration of Proposition 1: the second vehicle becomes neutral when its processing time is set to the cycle time, so the third vehicle starts at the same position as the second vehicle.

Using the standard or improved model, the DEF for MMS with stochastic failures can be obtained by adding the first-stage constraints (1b)–(1d) to the corresponding second-stage formulation and adding copies of all second-stage variables and constraints. We skip the details for brevity.
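As a numeric sanity check of Proposition 1 (our sketch, not from the paper), the following C snippet revisits the Figure 2 data with the second vehicle failed: removing it and closing the gap (original model) gives the same total work overload as setting its processing time to $c$ (improved model).

```c
#include <stdio.h>

/* Total work overload at one station under the side-by-side policy. */
static double overload(const double *b, int n, double c, double l) {
    double z = 0.0, total = 0.0;
    for (int t = 0; t < n; t++) {
        double w = z + b[t] - l; if (w < 0) w = 0;
        total += w;
        z = z + b[t] - c - w; if (z < 0) z = 0;
    }
    return total;
}

int main(void) {
    const double c = 7.0, l = 10.0;
    double removed[4] = {9, 5, 9, 9};     /* vehicle 2 pulled out */
    double neutral[5] = {9, 7, 5, 9, 9};  /* vehicle 2 set to b = c */
    printf("removed: %.0f TU, neutral: %.0f TU\n",
           overload(removed, 4, c, l), overload(neutral, 5, c, l));
    /* both print 1 TU */
    return 0;
}
```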
Thus, an underestimator of Q(x, ξ ω) can be constructed by only the so-called Benders’ optimality cuts .We now describe more details on our proposed L-shaped algorithm. We form the relaxed master problem for formulation (4) and (5) as follows: min x,θ X ω∈Ω ρωθω (6a) s.t. x ∈ X (6b) θω ≥ Gιωx + gιω, ι ∈ { 1, . . . , l }, ω ∈ Ω, (6c) where the auxiliary variable θω approximates the optimal value of the second-stage problem under scenario ω ∈ Ω, i.e., Q(x, ξ ω), through cuts θω ≥ Gιωx + gιω formed up to iteration l.Let (ˆ xι, ˆθι) be an optimal solution to the relaxed master problem (6). For each scenario ω ∈ Ω, we form a subproblem (5) at ˆ xι. Suppose that given ˆ xι, ˆ πιω denotes an optimal dual vector associated with the constraints in (5). That is, ˆ πιω is an optimal extreme point of the dual subproblem (DSP) max π {π⊤ ω (hω − Tω ˆxι)|π⊤ ω D ≤ q⊤, π ω ≥ 0}, (7) 11 where πω is the associated dual vector. Then, using linear programming duality, we generate an optimality cut as θω ≥ Gιωx + gιω, (8) where Gιω = −(ˆ πιω)⊤Tω and gιω = (ˆ πιω)⊤hω.Our proposed L-shaped algorithm iterates between solving the relaxed master problem (6) and subproblems (5) (one for each ω ∈ Ω) until a convergence criterion on the upper and lower bounds is satisfied. This algorithm results in an L-shaped method with multiple cuts. In order to exploit the specific structure of the MMS problem and to provide improvements on the dual problem, let us define variables πsp , πwo , πf s , πch , πsf , and πcf corresponding to starting position constraints (3c), work overload constraints (3d), first station starting position constraints (3e), regenerative production planning constraints (3f), starting position of the ve-hicles following a failed vehicle (3f), and regenerative production planning with failed vehicles (3g), respectively. The DSP for scenario ω ∈ Ω at a candidate solution ˆ xι, obtained by solving a relaxed master problem, can be formulated as follows: max π X k∈K X t∈T πsp kt (X v∈V pkv ˆxvt − c) + πwo kt (X v∈V pkv ˆxvt − lk) (9a) s.t. πsp k0 + πwo k0 + πf s k + πsf k0 ≤ 0, k ∈ K (9b) πsp kt − πsp k(t+1) − πwo k(t+1) + πsf kt − πsf k(t+1) ≤ 0, k ∈ K, t ∈ { 1, .., T − 1} (9c) πsp kT − πch k ≤ 0, k ∈ K (9d) πsp kt + πwo kt ≤ 1, k ∈ K, t ∈ { 1, .., T − 1} (9e) πsp kT + πwo kT + πcf k ≤ 1, k ∈ K (9f) πsp kt , π wo kt , π sf kt , π cf k ≥ 0, k ∈ K, t ∈ T (9g) πf s k , π ch k unrestricted , k ∈ K (9h) We provide improvements to the dual problem in several ways. The dual variables πf s and πcf are removed since the corresponding subproblem constraints (3f) and (3g) are eliminated in the improved model. The dual variables πf s and πch are not in the objective function and are unrestricted, which means that we can remove these variables and the constraints with those variables from the formulation without altering the optimal value of the problem. In our preliminary computational studies, we improved the dual subproblem by removing these variables. However, we observed that most of the DSPs have multiple optimal solutions, and as the number of vehicles and stations increase, it is more likely to have multiple optimal solutions. This naturally raises the question of what optimal dual vector provides the strongest cut, if we add only one cut per iteration per scenario. One can potentially add the cuts corresponding to all optimal dual extreme points, however, this results in an explosion in the size of the relaxed master problem after just a couple of iterations. 
While there is no reliable way to identify weak cuts (Rahmaniani et al., 2017), we executed experiments in order to find a pattern for strong cuts. Our findings showed that adding the cut corresponding to the optimal dual extreme point with the most non-zero variables results in the fastest convergence. Thus, we added an $\ell_1$ regularization term to the objective function of the DSP, so that the new objective is encouraged to choose an optimal solution with the most non-zero variables. Accordingly, we propose an improved DSP formulation as follows:

$$\max_\pi \; \sum_{k \in K} \sum_{t \in T} \pi^{sp}_{kt} \Big(\sum_{v \in V} p_{kv} \hat{x}_{vt} - c + \epsilon\Big) + \pi^{wo}_{kt} \Big(\sum_{v \in V} p_{kv} \hat{x}_{vt} - l_k + \epsilon\Big) \qquad (10a)$$
$$\text{s.t.} \quad \pi^{sp}_{kt} - \pi^{sp}_{k(t+1)} - \pi^{wo}_{k(t+1)} \leq 0, \quad k \in K,\; t \in \{1, \ldots, |T|-1\} \qquad (10b)$$
$$\pi^{sp}_{kt} + \pi^{wo}_{kt} \leq 1, \quad k \in K,\; t \in T \qquad (10c)$$
$$\pi^{sp}_{kt},\, \pi^{wo}_{kt} \geq 0, \quad k \in K,\; t \in T \qquad (10d)$$

4.2. Heuristic Solution Approach

MMS is an NP-hard problem, and the stochastic failures of products (cars) increase the computational burden of solving the problem drastically. Hence, it is essential to create efficient heuristic procedures in order to solve industry-sized problems. In this section, we provide a fast and easy-to-implement greedy heuristic to find a good initial feasible first-stage decision (i.e., a sequence of vehicles) and an efficient tabu search (TS) algorithm to improve the solution quality.

Although all $|V|!$ vehicle permutations are feasible, the proposed greedy heuristic aims to find a good initial feasible solution. To achieve this, a solution is generated for the deterministic counterpart of the proposed MMS problem, which excludes vehicle failures. We refer to this problem as the one-scenario problem, since the corresponding problem has a single scenario with no failed vehicles. Assuming that the failure probability of each vehicle is less than or equal to 0.5, the scenario with no failed vehicles has the highest probability. Once such a feasible sequence of vehicles is generated, the TS algorithm improves this solution in two parts: first over the one-scenario problem, and then over the full-information problem.

4.2.1. Greedy Heuristic

It is important for a local search heuristic algorithm to start with a good quality solution. A naive approach to generating an initial solution (sequence) is to always select the vehicle that causes the minimum new work overload for the next position. However, this approach is myopic, since it only considers the current position. We remediate this issue by decreasing future work overloads, which includes considering idle times and dynamic utilization rates. Accordingly, in order to generate a good initial solution, we propose an iterative greedy heuristic that follows a priority rule based on work overload, idle time, and the weighted sum of processing times, to be defined shortly.

Before explaining our proposed greedy heuristic, let us define some technical terms. The idle time refers to the duration that an operator waits for the next vehicle to enter the station borders. The weights of processing times are determined using station utilization rates, inspired by car sequencing problem utilization rates (Solnon, 2000; Gottlieb et al., 2003). We describe the utilization rate of a station as the ratio between the average processing time at a station and the cycle time, so the utilization rate of station $k$ is $\sum_{v \in V} p_{kv} / (|V| \cdot c)$. At each iteration, after a new assignment of a vehicle, the dynamic utilization rates are calculated by considering only the unassigned vehicles. Accordingly, the weighted sum of the processing times of a vehicle $v$ is calculated using (11):

$$\frac{\sum_{k \in K} p_{kv} \sum_{i \in \hat{V}} p_{ki}}{|K| \cdot |\hat{V}| \cdot c}, \qquad (11)$$

where $\hat{V}$ denotes the set of unassigned vehicles. If the utilization rate of a station is greater than 1, then the average processing time is more than the cycle time, which induces an unavoidable work overload. On the other hand, a utilization rate close to 0 indicates that the average processing time is minimal compared to the station's allocated time.
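As an illustration of (11) (ours, using the six-vehicle data of Table 2 in the example below, with all vehicles still unassigned for simplicity), the following C snippet computes the weighted sums; vehicle C indeed gets the highest score among the gasoline vehicles:

```c
#include <stdio.h>

#define NK 2   /* stations */
#define NV 6   /* vehicles A..F */

/* Weighted sum of processing times, Eq. (11):
   sum_k p_{kv} * (sum over unassigned i of p_{ki}) / (|K| * |Vhat| * c). */
int main(void) {
    const double p[NK][NV] = {
        {15, 16,  2, 3, 2, 4},   /* station 1 */
        { 4,  3, 10, 8, 9, 7}};  /* station 2 */
    const double c = 7.0;
    for (int v = 0; v < NV; v++) {
        double score = 0.0;
        for (int k = 0; k < NK; k++) {
            double colsum = 0.0;             /* sum over unassigned i */
            for (int i = 0; i < NV; i++) colsum += p[k][i];
            score += p[k][v] * colsum;
        }
        score /= NK * NV * c;
        printf("vehicle %c: %.3f\n", 'A' + v, score);
    }
    return 0;
}
```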
Accordingly, the weighted sum of the processing times of a vehicle v is calculated using (11):

\frac{\sum_{k \in K} p_{kv} \sum_{i \in \hat{V}} p_{ki}}{|K| \cdot |\hat{V}| \cdot c},   (11)

where \hat{V} denotes the set of unassigned vehicles. If the utilization rate of a station is greater than 1, then the average processing time exceeds the cycle time, which induces an unavoidable work overload. On the other hand, a utilization rate close to 0 indicates that the average processing time is minimal compared to the station's allocated time.

Our proposed greedy heuristic builds a sequence iteratively, one position at a time, starting from the first position and iterating over positions. We use t to denote an iteration. At each iteration t = 1, . . . , T, the set of unassigned vehicles that cause the minimum new work overload, denoted V_{t,wo}, is determined. Ties are broken by selecting, from V_{t,wo}, the vehicles that cause the minimum new idle time; the resulting set is denoted V_{t,idle}. In the case of further ties, the vehicle with the highest weighted sum of processing times in V_{t,idle} is assigned to position t of the sequence. Note that the first vehicle of the sequence is the vehicle with the highest weighted sum of processing times in V_{0,idle}, since there is no work overload initially.

Finally, we enhance the proposed greedy heuristic by considering the category of the vehicles. Motivated by our case study, we categorize the vehicles based on the engine type, electric or non-electric, because the engine type is the most restrictive feature due to the high EV ratio (the number of EVs divided by the number of all vehicles). Moreover, the engine type leads to different processing times at a given station. Hence, we modify our greedy heuristic to first decide, at each iteration, whether an EV or a non-EV should be assigned to the next position. Accordingly, first the EV ratio is calculated, and an EV is assigned to the first position. The procedure then always follows the EV ratio. For example, if the EV ratio is 1/3, an EV will be assigned to positions 1 + 3t for t = 1, . . . , |T|/3 − 1. In the case of a non-integer EV ratio, the position difference between any two consecutive EVs is the integer part of the ratio plus zero or one, decided randomly based on the decimal part of the ratio. Once the vehicle category has been decided for the entire sequence, the specific vehicle to be assigned is selected by the procedure described above. We note that this enhancement of the greedy heuristic may be applied to any restrictive feature that causes large variations in processing times.

Vehicle | Engine   | p1 | p2
A       | Electric | 15 | 4
B       | Electric | 16 | 3
C       | Gasoline | 2  | 10
D       | Gasoline | 3  | 8
E       | Gasoline | 2  | 9
F       | Gasoline | 4  | 7

Table 2: Illustration of the greedy heuristic

To illustrate the greedy heuristic, consider an example with six vehicles and two stations. The processing times and engine types of the vehicles are given in Table 2. The cycle time is 7 TU, and the lengths of the stations are 20 TU and 10 TU, respectively. The EV ratio is 1/3. We consider only EVs for the first position; vehicle A is designated to the first position since it causes less idle time than vehicle B. Next, none of the non-EVs causes work overload or idle time, so we assign the vehicle with the highest weighted sum of processing times to the second position, vehicle C. The procedure continues with another non-EV, and vehicle F is assigned to the third position because it is the only vehicle that does not cause any work overload.
Consistent with the 1/3 EV ratio, an EV must be assigned to the fourth position, and vehicle B is assigned to this position as it is the only EV left. Vehicle E is assigned to the fifth position due to its higher weighted sum of processing times. Finally, vehicle D is assigned to the last position. The resulting sequence is A-C-F-B-E-D, with a work overload of 3 TU occurring only at position 6 at station 2.

4.2.2. Tabu Search Algorithm

This section proposes a simulation-based local search algorithm over a very large neighborhood with tabu rules. The TS algorithm starts from the initial feasible solution (sequence) generated by the iterative greedy heuristic and improves it via iterative improvements within the designed neighborhood. At each iteration of the TS, a transformation operator is randomly selected based on operator weights and applied to the incumbent solution to visit a random neighbor, respecting the tabu rules. The candidate solution is accepted if the objective function value does not deteriorate; i.e., the candidate solution is rejected only if it has more total work overload. Then another random operator is applied to the incumbent solution. This process repeats until the stopping criterion is met.

As mentioned above, the TS has two parts. The first part acts as the second step of the initial solution generation procedure, since it improves the solution provided by the greedy heuristic for the one-scenario problem. In our preliminary numerical experiments, we observed that this step can drastically improve the initial solution quality. Hence, we conduct this step for a duration τ_one. Next, the algorithm transitions to the full-information problem and reevaluates the objective function value of the incumbent solution, i.e., the sequence generated by the first part of the TS. In the second part of the TS algorithm, the objective function value of the sequence is evaluated for the full-information problem. To do this, we calculate the total work overload for all realizations ω ∈ Ω, given the first-stage decision (sequence). That is, we calculate the objective function of (3) for each realization ω ∈ Ω and take the weighted sum, each term multiplied by the probability of its scenario. Observe from (3) that once the first-stage decision is fixed, the problem decomposes by scenarios and stations. Accordingly, the solution evaluation process is parallelized over scenarios and stations. The TS algorithm continues evaluating solutions for the full-information problem for a duration τ_full. The time allocated to the second part, τ_full, is much larger than that of the first part, τ_one, since iterating over the one-scenario problem is much faster than iterating over a set of realizations. In the remainder of this section, we explain the various components of the TS algorithm.

Objective Evaluation. The objective function of the problem for a given scenario is the same as the objective in (3a): the total work overload over all stations and positions. Evaluating the objective after each movement is the bottleneck of our algorithm, since the new total work overload needs to be determined. Note that the objective evaluation starts at the first position and is executed iteratively, since there is a sequence dependency. Accordingly, we propose to reduce the computational burden in two ways.
First, reevaluating the whole sequence is unnecessary, since the transformation operators make local changes in the sequence; i.e., some parts of the sequence remain unaltered and do not require reevaluation. Hence, we apply partial reevaluation after each movement. To explain partial reevaluation, assume that the vehicles at positions t_1 and t_2 are swapped. We know for certain that the subsequence corresponding to positions [1, t_1 − 1] is not impacted by the swap operation; hence, we do not reevaluate these positions. Additionally, we may not have to reevaluate all the positions in [t_1, t_2 − 1] and in [t_2, |T|]. In each of these subsequences, there may be a reset position, which guarantees that the objective does not change from that position until the end of the subsequence. Since the rest of the subsequence after the reset position is unchanged, we can jump to the end of the subsequence. To highlight how partial reevaluation may speed up the objective reevaluation process, suppose that the vehicles at positions 350 and 380 are swapped. We know for certain that the subsequence corresponding to positions [1, 349] is not impacted by the swap. Additionally, if there is a reset point before position 380 (and before |T|), we do not have to reevaluate all the positions between 350 and 380, nor all the positions between 380 and |T|.

Second, we calculate the objective function in an accelerated way. Traditionally, the work overload and starting position for position t at station k, w_kt and z_{k(t+1)} respectively, are calculated as w_kt = z_kt + b_kt − l_k and z_{k(t+1)} = z_kt + b_kt − w_kt − c, where w_kt, z_kt ≥ 0. Instead of calculating the work overload and starting position vectors separately, we propose using a single vector to extract this information, which is in fact a different representation of the starting position vector z. If there is a work overload at position t, then z_{k(t+1)} = l_k − c. Otherwise, if there is no work overload at position t, then z_{k(t+1)} = z_kt + b_kt − c, or equivalently z_{k(t+1)} = z_{k(t−1)} + b_{k(t−1)} − c + b_kt − c − w_{k(t−1)}. Again, if there is a work overload at position t − 1, then z_{k(t+1)} = (l_k − c) + b_kt − c; otherwise, if there is no work overload at t − 1, then z_{k(t+1)} = z_{k(t−1)} + (b_{k(t−1)} − c) + (b_kt − c). Since we know that z_{k0} = 0, we can generalize this to z_{k(t+1)} = \sum_{h=1}^{t} (b_{kh} − c), which is the cumulative sum of the vector η_k = b_k − c up to and including position t. However, this generalization assumes that there is no work overload or idle time up to position t. We note that there is an idle time at position t + 1 when z_kt + b_kt − c < 0. Accordingly, we can write the general formula as

z_{k(t+1)} = \max(0, \min(l_k - c,\ z_{kt} + \eta_{kt})),

which we refer to as the conditional cumulative sum of η_k up to position t. Intuitively, the conditional cumulative sum is computed as follows: starting from position 0, the cumulative sum is calculated iteratively within the closed range [0, l_k − c]. Whenever the cumulative sum falls below the lower bound zero or exceeds the upper bound l_k − c, we set it to the corresponding bound's value. If the cumulative sum falls below the lower bound, the shortfall equals the idle time; if it exceeds the upper bound, the excess equals the work overload. For example, if the cumulative sum is −2 at a position, the cumulative sum is set to zero and there is 2 TU of idle time at that position.
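The conditional cumulative sum is straightforward to implement. The following is a minimal sketch for a single station under one scenario, with hypothetical argument names (b the processing times along the sequence, l the station length, c the cycle time); it is an illustration of the technique, not the authors' exact code.

```python
def station_overload(b, l, c):
    """Evaluate one station by the conditional cumulative sum of
    eta = b - c, clipped to [0, l - c]: the excess above the upper
    bound is work overload, the shortfall below zero is idle time."""
    z, overload, idle = 0.0, 0.0, 0.0
    for b_t in b:                 # processing time at each position
        s = z + b_t - c           # unclipped next starting position
        if s > l - c:             # operator would drift past the border
            overload += s - (l - c)
            z = l - c
        elif s < 0:               # operator waits for the next vehicle
            idle += -s
            z = 0.0
        else:
            z = s
    return overload, idle
```

Summing this over stations (and, in the second part of the TS, averaging over scenarios) gives the objective value of a sequence; partial reevaluation simply runs this loop on the affected subsequences only.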
In light of the proposed improvements, the partial reevaluation process is executed on two subsequences, [t_1, t_2) and [t_2, |T|], assuming that t_1 and t_2, with t_1 < t_2, are the two positions selected by a transformation operator. The process starts at the first position of the corresponding subsequence. We set z_{k0} = η_{k1} and calculate the starting position, work overload, and idle time for the positions in the subsequence iteratively, as described above. The reevaluation of the subsequence is complete when either a reset position is found or the whole subsequence has been iterated. A reset position occurs at position t in two distinct cases: 1) z_{k(t+1)} = 0, when the processing time at the starting position t_1 (or t_2) is decreased; 2) the sum of the idle time and work overload up to position t in the current subsequence exceeds the total increase in processing time at the corresponding starting position t_1 (or t_2), when the processing time at the starting position t_1 (or t_2) is increased.

Transformation Operators. In this section, we explain the details of the transformation operators. We employ swap, forward and backward insertion, and inversion operators. The swap operator interchanges the positions of two randomly selected cars. Insertion removes a car from position i and inserts it at position j. Insertion is applied in two different directions, backward and forward. When i > j, the insertion is called a backward insertion, and all the vehicles between positions j and i move one position to the right, i.e., are scheduled later. Conversely, a forward insertion occurs when i < j, and all the vehicles between positions i and j move one position to the left, i.e., are scheduled earlier. Inversion takes two randomly selected positions in the sequence and reverses the subsequence between them. Repeated application of these operators creates a very large neighborhood, which helps the improvement procedure escape local optima, especially when combined with a non-deteriorating solution acceptance procedure. The latter enables the algorithm to move across plateaus consisting of solutions with the same objective function value (see Section 5.3.2 for numerical experiments).

Tabu List. We design the tabu list in a non-traditional manner. This list includes the movements that induce undesired subsequences. Based on our observations, we define an undesired subsequence as back-to-back EVs, because consecutive EVs cause a tremendous amount of work overload at the battery loading station. Accordingly, any movement that results in back-to-back EVs is tabu. For the sake of clarity, we describe the tabu movements in detail for each operator separately in Appendix A.

4.3. SAA Approach and Solution Quality Assessment

In (4), it is assumed that the probability of each scenario is known a priori, which may not hold in practice. In addition, the exponential growth of the number of scenarios causes an explosion in the size of the stochastic program. Hence, we utilize the SAA approach to tackle these issues. Consider the abstract formulation (4). The SAA method approximates the expected value function using an independently and identically distributed (i.i.d.) random sample Ω_N := {ω^1, . . . , ω^N} ⊂ Ω of N realizations of the random vector, as follows:

z_N = \min_{x \in X} \frac{1}{N} \sum_{\omega \in \Omega_N} Q(x, \xi^\omega).   (12)

The optimal value of (12), z_N, provides an estimate of the true optimal value (Kleywegt et al., 2002).
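In code, the sample-average estimate in (12) is just an average of second-stage evaluations over the sampled scenarios. A minimal sketch, with Q standing in for a second-stage evaluator (e.g., one built from the station-level evaluation sketched earlier):

```python
def saa_objective(x, sample, Q):
    """Sample-average estimate of E[Q(x, xi)] over an i.i.d. scenario
    sample, as in (12); Q(x, omega) returns the second-stage cost."""
    return sum(Q(x, omega) for omega in sample) / len(sample)
```

Minimizing this estimate over x ∈ X, exactly or heuristically, yields the SAA optimal value z_N.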
Let x̂_N and x* denote an optimal solution to the SAA problem (12) and to the true stochastic program (4), respectively. Note that E[Q(x̂_N, ξ_ω)] − E[Q(x*, ξ_ω)] is the optimality gap of solution x̂_N, where E[Q(x̂_N, ξ_ω)] is the (true) expected cost of solution x̂_N and E[Q(x*, ξ_ω)] is the optimal value of the true problem (4). A small optimality gap implies a high quality solution. However, as x* (and hence the optimal value of the true problem) may not be known, one may obtain a statistical estimate of the optimality gap to assess the quality of the candidate solution x̂_N (Homem-de-Mello and Bayraksan, 2014). That is, given that E[z_N] ≤ E[Q(x*, ξ_ω)], we can obtain an upper bound on the optimality gap as E[Q(x̂_N, ξ_ω)] − E[z_N]. We employ the multiple replication procedure (MRP) of Mak et al. (1999) in order to assess the quality of a candidate solution by estimating an upper bound on its optimality gap. Pseudo-code for this procedure is given in Algorithm 1. We use the MRP in Section 5 to assess the quality of the solutions generated by the different approaches.

Algorithm 1 Multiple Replication Procedure MRP_α(x̂)
Input: Candidate solution x̂, replication size M, and α ∈ (0, 1).
Output: A normalized 100(1 − α)% upper bound on the optimality gap of x̂.
for m = 1, 2, . . . , M do
    Draw an i.i.d. sample Ω_N^m of realizations ξ_ω^m, ω ∈ Ω_N^m.
    Obtain z_N^m := min_{x ∈ X} (1/N) Σ_{ω ∈ Ω_N^m} Q(x, ξ_ω^m).
    Estimate the out-of-sample cost of x̂ as ẑ_N^m := (1/N) Σ_{ω ∈ Ω_N^m} Q(x̂, ξ_ω^m).
    Estimate the optimality gap of x̂ as G_N^m := ẑ_N^m − z_N^m.
end for
Calculate the sample mean and sample variance of the gap as Ḡ_N = (1/M) Σ_{m=1}^{M} G_N^m and s_G^2 = (1/(M − 1)) Σ_{m=1}^{M} (G_N^m − Ḡ_N)^2.
Calculate a normalized 100(1 − α)% upper bound on the optimality gap as (1/z̄_N)(Ḡ_N + t_{α;M−1} s_G/√M), where z̄_N = (1/M) Σ_{m=1}^{M} z_N^m.

In addition, we propose an MRP-integrated SAA approach for candidate solution generation and quality assessment, given in Algorithm 2. A candidate solution is generated by solving an SAA problem with a sample of N realizations. Then, we use the MRP to estimate an upper bound on the optimality gap of the candidate solution. If the solution is ϵ-optimal, i.e., the estimated upper bound on its optimality gap is less than or equal to the threshold ϵ, the algorithm stops. Otherwise, the sample size is increased until a good quality solution is found. The algorithm returns a candidate solution and its optimality gap.

Algorithm 2 MRP-integrated SAA
Input: List of sample sizes N_list and ϵ, α ∈ (0, 1).
Output: Solution x̂ and OptGap.
for N in N_list do
    Obtain a candidate solution x̂_N by solving the SAA problem (12).
    Calculate a normalized 100(1 − α)% upper bound on the optimality gap as MRP_α(x̂_N).
    if MRP_α(x̂_N) ≤ ϵ then
        x̂ ← x̂_N and OptGap ← MRP_α(x̂_N); exit the for loop.
    end if
end for

We end this section by noting that each of the DEF presented in Section 3.2, the L-shaped algorithm presented in Section 4.1, and the heuristic algorithm presented in Section 4.2 can be used to solve the SAA problem and obtain a candidate solution. However, the probabilities of the scenarios ω ∈ Ω, ρ_ω, must be changed in the formulations so that they reflect the scenarios in a sample Ω_N. Let N̂ and n_ω denote the set of unique scenarios in Ω_N and the number of occurrences of scenario ω, respectively. Thus, in the described DEF, L-shaped algorithm, and TS algorithm, Σ_{ω ∈ Ω} ρ_ω(·) changes to (1/N) Σ_{ω ∈ Ω_N} (·) or, equivalently, (1/N) Σ_{ω ∈ N̂} n_ω(·). Accordingly, in the L-shaped method, we generate one optimality cut for each unique scenario ω ∈ N̂ by solving |N̂| subproblems at each iteration.
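A compact Python rendering of Algorithm 1 may help. This is a sketch under stated assumptions: draw_sample, solve_saa, and eval_cost are stand-ins for the sampling routine, an SAA solver returning its optimal value, and a sample-average cost evaluator such as the saa_objective above; the t quantile comes from scipy.stats.

```python
from statistics import mean, stdev
from scipy.stats import t as student_t

def mrp(x_hat, M, N, draw_sample, solve_saa, eval_cost, alpha=0.05):
    """Algorithm 1: normalized 100(1-alpha)% upper bound on the
    optimality gap of a candidate solution x_hat (Mak et al., 1999)."""
    gaps, z_values = [], []
    for _ in range(M):
        sample = draw_sample(N)             # i.i.d. scenario sample
        z_m = solve_saa(sample)             # z_N^m: SAA optimal value
        z_hat_m = eval_cost(x_hat, sample)  # out-of-sample cost of x_hat
        gaps.append(z_hat_m - z_m)          # gap estimate G_N^m
        z_values.append(z_m)
    g_bar, s_g = mean(gaps), stdev(gaps)    # sample mean and (M-1) std
    bound = g_bar + student_t.ppf(1 - alpha, M - 1) * s_g / M ** 0.5
    return bound / mean(z_values)           # normalize by z-bar
```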
5. Numerical Experiments

In Section 5.1, we describe the experimental setup. Then, in Sections 5.2 and 5.3, we assess the solution quality and computational performance of the proposed L-shaped and heuristic algorithms, respectively, applied to an SAA problem.

5.1. Experimental Setup

We generated real-world inspired instances from our automobile manufacturer partner's assembly line and planning information. As given in Table 3, we generated three types of instances: (1) small-sized instances with 7-10 vehicles, to assess the performance of the L-shaped algorithm; (2) medium-sized instances with 40 vehicles, to assess the performance of the TS algorithm on the one-scenario problem; and (3) large-sized instances with 200, 300, and 400 vehicles, to evaluate the performance of the TS algorithm. All instances have five stations, of which the first is selected as the most restrictive station for EVs, the battery loading station. The rest are selected among other critical stations that conflict with the battery loading station.

Instance Type | |V|           | |K| | Number of Instances
Small         | 7, 8, 9, 10   | 5   | 30 × 4
Medium        | 40            | 5   | 30 × 1
Large         | 200, 300, 400 | 5   | 30 × 3

Table 3: Data sets

The cycle time c is 97 TU, and the station length l is 120 TU for all but the battery loading station, whose length is two station lengths, 240 TU. Information about the distribution of the processing times is given in Table 4. It can be observed that the average and maximum processing times for each station are lower than the cycle time and the station length, respectively. Moreover, the ratio of EVs is in the range [0.25, 0.33] across all instances.

Time (s)
Station ID | Min  | Mean | Max
1          | 42.6 | 94.1 | 117.2
2          | 7.9  | 84.3 | 197.9
3          | 57.8 | 96.2 | 113.3
4          | 26.9 | 96.9 | 109.7
5          | 57.8 | 96.2 | 114.3

Table 4: Processing time distribution

We derived the failure rates from six months of historical data by performing predictive feature analysis on vehicles. Based on the analysis, two groups of vehicles are formed according to their failure probabilities: low-risk and high-risk vehicles, whose failure probabilities lie in the ranges [0.0, 0.01] and [0.2, 0.35], respectively. The failure probability is mostly higher for recently introduced features; e.g., the average failure probability of EVs is 50% higher than that of other vehicles. High-risk vehicles constitute [0.03, 0.05] of all vehicles. However, this percentage is increased to [0.15, 0.25] for the small-sized instances in order to have a higher rate of failed vehicles. We note that failures are not considered for the medium-sized instances, since these instances are used only for the one-scenario problem, which by definition does not involve failures.

The number of failure scenarios, 2^{|V|}, increases exponentially in the number of vehicles. Thus, we generated an i.i.d. random sample of N realizations of the failure scenarios, forming an SAA problem. For each failure scenario and vehicle, we first chose whether the vehicle was high-risk or low-risk (based on their prevalence). Then, depending on whether it was a high-risk or low-risk vehicle, a failure probability was randomly selected from the respective range. Finally, it was determined whether the vehicle failed or not. In order to have a more representative sample of scenarios for the large-sized instances, no low-risk vehicle was allowed to fail in any scenario. For each parameter configuration, we generated 30 instances. The vehicles of each instance were randomly selected from a production day, respecting the ratios mentioned above.
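The sampling procedure just described can be sketched as follows; this is a minimal illustration, with the high-risk share fixed at a hypothetical 0.04 (within the reported [0.03, 0.05] range) and the probability ranges taken from the text.

```python
import random

def draw_failure_scenario(vehicles, high_risk_share=0.04):
    """One i.i.d. failure scenario: each vehicle is first labeled
    high- or low-risk by prevalence, then a failure probability is
    drawn from the group's range and the failure is realized."""
    scenario = {}
    for v in vehicles:
        if random.random() < high_risk_share:
            prob = random.uniform(0.20, 0.35)   # high-risk range
        else:
            prob = random.uniform(0.00, 0.01)   # low-risk range
        scenario[v] = random.random() < prob    # True = vehicle fails
    return scenario

# sample = [draw_failure_scenario(vehicles) for _ in range(N)]
```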
The algorithms were implemented in Python 3. For solving optimization problems, we used Gurobi 9.0. The time limit is 600 seconds for all experiments unless otherwise stated. We ran our experiments on computing nodes of the Clemson University supercomputer. The experiments with the exact solution approach were run on nodes with a single core and 15 GB of memory, and the experiments with the heuristic solution approach were run on nodes with 16 cores and 125 GB of memory.

5.2. Exact Solution Approach

In this section, we present results on the solution quality and computational performance of the L-shaped algorithm. We used the MRP scheme, explained in Section 4.3, to assess the solution quality. We also compared the computational performance of the L-shaped algorithm with that of solving the DEF. We present the results for 120 small-sized instances consisting of 7 to 10 vehicles. We do not present results for large-sized instances, as our preliminary experiments showed that the number of instances that can be solved to optimality decreases drastically.

We also point out that, instead of solving a relaxed master problem to optimality at each iteration of the L-shaped algorithm, one can aim for just obtaining a feasible solution x̂ ∈ X. This may save a significant amount of computational time that would otherwise be spent exploring solutions that were already eliminated in previous iterations. This kind of implementation, referred to as branch-and-Benders-cut (B&BC), has been studied in the literature; see, e.g., Hooker (2011); Thorsteinsson (2001); Codato and Fischetti (2006). In our implementation of the proposed L-shaped algorithm, we used Gurobi's lazy constraint callback to generate cuts at each feasible integer solution found in the course of the branch-and-bound algorithm (a sketch of this callback pattern is given at the end of Section 5.2).

5.2.1. Solution Quality

Figure 4 shows the impact of the sample size on the solution quality of the SAA problem. Observe the progressive improvement in the upper bound on the optimality gap (the MRP output) as the sample size increases from 100 to 1000. We set the number of replications M to 30 and α = 0.05 (95% confidence interval). While the mean optimality gap decreases gradually from 0.76% to 0.12%, a drastic improvement is observed in the variance. When the sample size is 100, 36 out of 120 solutions have an optimality gap larger than 1%; when the sample size is 1000, all of the obtained solutions have an optimality gap below 1%. The figure shows that good solutions can be obtained with a sample size of 100, yet this is not assured due to the high variance of the approximation. Consequently, the results suggest that the sample size should be increased until the variance of the objective estimate is small enough.

Figure 4: Solution quality of the SAA problem for different sample sizes

Based on the results in Figure 4, we implemented the MRP-integrated SAA scheme, presented in Section 4.3 and Algorithm 2, to balance the required computational effort and the solution quality. We set α = 0.05 (95% confidence interval) and ϵ = 0.01 in the MRP. While it is ensured that we obtain solutions within a 1% optimality gap, most of the solutions are found with the least computational effort, e.g., at the first iteration with a sample size of 100. In Table 5, we provide key results on the performance of the MRP-integrated SAA scheme, where the number of replications M is 30 and the MRP sample size N is 5000.
The average values of the SAA optimal value, the accepted candidate solutions' expected objective value, and the optimality gap are presented in Table 5. Of the accepted candidate solutions, 84, 20, 11, and 5 were obtained with the sample sizes N_list = {100, 200, 500, 1000}, respectively. The average optimality gap is 0.2%, which shows that SAA can produce high-quality solutions.

Statistical Lower Bound E[z_N] | Estimated Objective Value E[Q(x̂_N, ξ_ω)] | Optimality Gap Ḡ_N
55.80                          | 55.91                                     | 0.11 (0.2%)

Table 5: Solution quality of the MRP-integrated SAA

Additionally, we assess the solution quality of the one-scenario problem (i.e., the deterministic MMS problem without any car failures). Observe from Figure 5 that the average optimality gap is 23%, the maximum optimality gap is 274%, and the standard deviation is 39%. Comparing the performance of the SAA and one-scenario problems shows that we can generate robust solutions by considering vehicle failures, which helps reduce work overloads by more than 20%.

Figure 5: Solution quality of the one-scenario problem

5.2.2. Computational Performance

In this section, we conduct different experiments to compare the DEF and the L-shaped algorithm. On the one hand, we assess the impact of using the improved model described in Section 3.2, obtained by setting the processing time of failed vehicles to the cycle time, and compare the results with those obtained using the standard model. The DEFs corresponding to the standard and improved models are denoted D_std and D_imp, respectively. Similarly, the L-shaped algorithms corresponding to the standard and improved models are denoted L_std and L_imp. On the other hand, we assess the impact of our proposed cut selection strategy, described in Section 4.1, obtained using ℓ1-norm regularization to find a cut with the least number of zero coefficients. We used the cut selection strategy with the improved model, and denote the corresponding L-shaped algorithm L_imp-cs.

In Table 6, we present the results on the impact of the improved model and the cut selection strategy, comparing the DEF and the L-shaped algorithm for solving the SAA problem. We report the average and standard deviation of the computational time (in seconds), labelled μ_t and σ_t, respectively, and the optimality gap, labelled Gap (in percent). Additionally, the number of instances that could not be solved optimally within the time limit is given in parentheses in the Gap columns. All time results are averages over the instances (out of 30) that could be solved optimally within the time limit, while the Gap results are averages over the instances that could not. Based on the results in Section 5.2.1, we conducted the computational experiments on the SAA problem with sample sizes 100, 200, 500, and 1000.
Each method column shows μ_t(s), σ_t(s), Gap(%); the count of instances not solved within the time limit is in parentheses.

|V| | N    | D_std              | D_imp             | L_std              | L_imp             | L_imp-cs
7   | 100  | 6.3, 3.8, -        | 1.4, 1.1, -       | 4.9, 3.2, -        | 1.2, 0.5, -       | 1.1, 0.5, -
7   | 200  | 10.2, 6.4, -       | 2.0, 1.1, -       | 7.9, 5.7, -        | 1.9, 0.9, -       | 1.6, 0.6, -
7   | 500  | 15.0, 8.8, -       | 3.2, 1.8, -       | 10.2, 6.5, -       | 2.4, 1.0, -       | 2.1, 0.8, -
7   | 1000 | 19.5, 11.2, -      | 4.4, 2.5, -       | 11.9, 8.0, -       | 2.8, 1.1, -       | 2.4, 0.9, -
8   | 100  | 29.3, 19.7, -      | 11.4, 10.5, -     | 51.7, 59.4, -      | 9.4, 6.6, -       | 5.2, 4.4, -
8   | 200  | 50.5, 31.5, -      | 19.1, 16.5, -     | 83.2, 83.4, -      | 16.7, 22.9, -     | 8.1, 6.9, -
8   | 500  | 88.9, 53.3, -      | 32.3, 24.9, -     | 145.0, 146.2, -    | 24.9, 26.4, -     | 13.4, 10.9, -
8   | 1000 | 127.9, 81.9, -     | 44.3, 35.9, -     | 159.1, 150.8, 0.31 (1) | 30.7, 26.5, - | 15.6, 14.2, -
9   | 100  | 170.3, 157.6, 0.33 (2) | 35.3, 27.3, - | 187.5, 130.3, 0.54 (9) | 43.1, 39.0, - | 25.0, 19.1, -
9   | 200  | 238.9, 165.1, 0.34 (7) | 68.4, 53.1, - | 315.6, 170.2, 0.54 (14) | 80.0, 96.4, - | 41.1, 49.4, -
9   | 500  | 263.6, 140.3, 0.38 (12) | 122.6, 120.3, 0.20 (1) | 357.0, 170.0, 0.55 (17) | 105.0, 79.9, 0.16 (1) | 58.6, 49.1, -
9   | 1000 | 317.7, 140.4, 0.44 (16) | 204.2, 153.5, 0.13 (1) | 366.3, 198.4, 0.55 (22) | 155.6, 100.9, 0.07 (1) | 87.1, 66.5, -
10  | 100  | 279.1, 159.7, 0.28 (12) | 91.8, 61.7, - | 486.5, 253.8, 0.61 (25) | 169.3, 137.1, 0.16 (3) | 133.7, 152.2, -
10  | 200  | 258.5, 161.2, 0.34 (23) | 160.9, 117.9, - | 344.2, 361.7, 0.61 (28) | 238.5, 179.6, 0.18 (4) | 158.8, 138.3, 0.16 (2)
10  | 500  | 565.2, 58.4, 0.48 (26) | 245.3, 151.4, 0.13 (6) | 283.9, 0.0, 0.69 (29) | 264.4, 193.9, 0.18 (12) | 223.5, 170.9, 0.16 (7)
10  | 1000 | 479.0, 89.4, 0.53 (28) | 294.1, 149.2, 0.18 (13) | 191.0, 0.0, 0.72 (29) | 266.0, 170.8, 0.26 (18) | 273.6, 189.6, 0.19 (8)

Table 6: Computational performance of the DEF and L-shaped algorithms for the SAA problem of small-sized instances

Observe from Table 6 that using the improved model instead of the standard model drastically decreased the computational time and the optimality gap of both the DEF and the L-shaped algorithm. In particular, the solution time decreased for all instances. On average (over different |V| and N), we observe a 67% and 70% decrease for the DEF and the L-shaped algorithm, respectively. Additionally, the decrease in the standard deviation is around 64% and 74% for the DEF and the L-shaped algorithm, respectively, when non-optimal solutions are left out. Moreover, the number of instances that could not be solved optimally is reduced by using the improved model: on average, 83% and 78% of those instances are solved optimally with D_imp and L_imp, respectively. Additionally, the remaining non-optimal solutions are improved, as a reduction in the optimality gaps is achieved.

Another drastic improvement is provided by our cut selection strategy. Comparing L_imp and L_imp-cs in Table 6 shows that the mean and standard deviation of the computational time and the optimality gap are reduced by 35%, 33%, and 18%, on average. Furthermore, an optimal solution is found by L_imp-cs for 56% of the instances that could not be solved optimally within the time limit by L_imp.

Finally, we compare D_imp and L_imp-cs. Observe from Table 6 that L_imp-cs resulted in a lower mean computational time by 31%, 59%, 45%, and 1% for instances with 7, 8, 9, and 10 vehicles, respectively, and by 15%, 28%, 38%, and 45% for instances with 100, 200, 500, and 1000 scenarios, respectively. In the same order, the variance decreased by 56%, 58%, 38%, and -52% for instances with 7, 8, 9, and 10 vehicles, respectively, and by -1%, 18%, 41%, and 42% for instances with 100, 200, 500, and 1000 scenarios, respectively. We conclude that our L-shaped algorithm outperforms the DEF for instances with up to 9 vehicles, and that they provide comparable results for instances with 10 vehicles. Additionally, the superiority of the L-shaped algorithm over the DEF escalates as the number of scenarios increases.
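As referenced in the B&BC discussion above, the lazy-cut pattern can be sketched as follows. This is a minimal illustration, not the exact implementation: master is assumed to be a Gurobi model of (6) without the cuts (6c), x_vars and theta are its variables, and cut_for stands in for the DSP solve of Section 4.1 returning the coefficients of (8).

```python
import gurobipy as gp
from gurobipy import GRB

def solve_bbc(master, x_vars, theta, scenarios, cut_for, tol=1e-6):
    """Branch-and-Benders-cut: add optimality cuts (8) lazily at each
    integer-feasible solution of a single branch-and-bound tree."""
    master.Params.LazyConstraints = 1

    def callback(model, where):
        if where == GRB.Callback.MIPSOL:
            x_hat = model.cbGetSolution(x_vars)     # candidate first stage
            for w in scenarios:
                G, g = cut_for(w, x_hat)            # solve DSP (7) at x_hat
                cut_rhs = sum(G[i] * x_hat[i] for i in range(len(G))) + g
                if model.cbGetSolution(theta[w]) < cut_rhs - tol:
                    model.cbLazy(theta[w] >=
                                 gp.quicksum(G[i] * x_vars[i]
                                             for i in range(len(G))) + g)

    master.optimize(callback)
```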
5.3. Heuristic Solution Approach

In this section, we present results on the solution quality and computational performance of the TS algorithm. The solution quality is evaluated using the MRP scheme explained in Section 4.3. We also assess the computational performance of the TS algorithm in various respects. We set the operator selection probabilities (weights) based on our preliminary experiments: the weights of swap, forward insertion, backward insertion, and inversion are 0.45, 0.10, 0.15, and 0.30, respectively. We set the time limit for the one-scenario problem to 10 seconds, i.e., τ_one = 10 seconds, which leaves τ_full = 590 seconds. Finally, based on the results of the quality assessment, we set the sample size N to 1000 for the computational performance assessments.

5.3.1. Solution Quality

We solved the SAA problem of the large-sized instances using the TS algorithm. To assess the solution quality, we then used the proposed MRP-integrated SAA scheme, given in Algorithm 2, with the number of replications M = 30, MRP sample size N = 20000, α = 0.05 (95% confidence interval), ϵ = 0.05, and the list of sample sizes N_list = {1000, 2500, 5000}. Table 7 reports key results on the performance of the MRP-integrated SAA scheme. While the maximum optimality gap is 3.7%, the average optimality gap is 0.28%, which indicates that solving the SAA problem with the proposed TS algorithm can produce high-quality solutions. Figure 6 further shows that the optimality gap for most of the solutions is less than 1%, with only five outliers out of 90 instances.

Statistical Lower Bound E[z_N] | Estimated Objective Value E[Q(x̂_N, ξ_ω)] | Optimality Gap Ḡ_N
239.59                         | 240.26                                    | 0.67 (0.28%)

Table 7: Solution quality of the MRP-integrated SAA

Figure 6: Solution quality of the SAA problem

Moreover, we assess the solution quality of the one-scenario problem over the large-sized instances using the same procedure, in order to examine its robustness. Observe from Figure 7 that the average optimality gap is 24.8%, the maximum optimality gap is 76.5%, and the standard deviation is 10.8%. Comparing the performance of the SAA and one-scenario problems demonstrates that we can generate robust solutions by considering vehicle failures. Accordingly, we confirm with the industry-sized instances that we can reduce work overloads by more than 20% by considering stochastic car failures and solving the corresponding problem efficiently.

Figure 7: Solution quality of the one-scenario problem

5.3.2. Computational Performance

To assess the computational performance of the TS algorithm, we conducted four sets of tests: 1) we compared the solutions found by the TS algorithm with those found by an off-the-shelf solver for the one-scenario problem of the medium-sized instances; 2) we compared the solutions found by the TS algorithm with the optimal solutions found by the L_imp-cs approach for the SAA problem of the small-sized instances; 3) we compared the solutions found by the TS algorithm with those of a simulated annealing (SA) algorithm for the SAA problem of the large-sized instances; and 4) we analyzed the convergence of the TS algorithm for the one-scenario and SAA problems of the large-sized instances. We executed 30 runs for each instance and test.

Table 8 reports the results of the first set of computational experiments, for the one-scenario problem. We note that we generated 30 instances that could be solved within a three-hour time limit with Gurobi (solving the DEF).
The minimum, average, maximum, and standard deviation of the computational time (in seconds) are shown for the TS algorithm and Gurobi. Additionally, the average number of movements is reported for the TS algorithm, to provide some insight into the efficiency of the implementation. The TS algorithm found the optimal solutions for all 30 instances, in all 30 runs, in under 10 seconds. The average computational times are 1140 seconds for the Gurobi solver and 0.33 seconds for the TS algorithm. These results show that the proposed TS algorithm can consistently provide optimal solutions to the one-scenario problem in a very short amount of time while avoiding local optima.

Method | Time (s): Min | Mean    | Max     | Std. Dev. | Move (#): Mean
Gurobi | 2.37          | 1140.91 | 9373.08 | 2613.95   | -
TS     | 6e-4          | 0.33    | 9.44    | 0.81      | 16940

Table 8: Computational performance of Gurobi and TS for the one-scenario problem of medium-sized instances

Method | Time (s): Min | Mean  | Max    | Std. Dev. | Gap (%): Mean | Max  | Std. Dev. | Optimally Solved (%) | Move (#): Mean
Gurobi | 5.02          | 68.91 | 210.06 | 50.99     | 0             | 0    | 0         | 100                  | -
TS     | 1e-4          | 0.08  | 0.28   | 0.08      | 0.14          | 2.79 | 0.41      | 83                   | 628

Table 9: Computational performance of Gurobi and TS for the SAA problem of small-sized instances

Recall that, as demonstrated in Section 5.2.2, L_imp-cs outperformed Gurobi in solving the SAA problem for small-sized instances. In the second set of experiments, we compared the computational effectiveness of solving the SAA problem of the small-sized instances with the TS algorithm and L_imp-cs. To this end, we chose a sample size of 1000 and solved 30 small-sized instances optimally using L_imp-cs. We then solved the same set of instances using the TS algorithm until either an optimal solution was obtained or the time limit was reached. Table 9 shows that the TS algorithm found optimal solutions in 83% of the experiments, 746 out of 900. The average optimality gap of the non-optimal solutions is 0.14%, indicating that the TS algorithm is reliable in terms of optimality. For the TS algorithm, we recorded the computational time as the time until the last improvement, so we can see that the TS algorithm is very efficient, with an average runtime of 0.08 seconds.

We observed that the TS algorithm often terminated at a local optimum when the number of vehicles is very small. Our hypothesis is that plateaus of solutions with the same objective value exist when the number of vehicles is large; this is not the case with only 10 vehicles, as there are a limited number of sequences and each sequence has a different objective function value. Hence, in the third set of experiments, we compared the TS algorithm and an SA algorithm on the large-sized instances to analyze the impact of the tabu list, and of accepting worse solutions, on escaping local optima. We used the same local search algorithm for the SA, with two differences: 1) the tabu rules were disabled, and 2) accepting worse solutions based on the acceptance criterion was enabled. For the SA algorithm, we set the starting temperature T_init = 10 and adopted geometric cooling with cooling constant α = 0.999. The acceptance probability is calculated using the Boltzmann distribution

P(\Delta f, T) = e^{-(f(x') - f(x))/T},

where x' is the new solution, x is the incumbent solution, and T is the current temperature.

We note that the Kruskal-Wallis test shows no significant difference between the computational performance of the TS and SA algorithms at a 95% confidence level.
However, as illustrated in Figure 8, the TS algorithm produces better results and converges faster than the SA algorithm (averaged over 900 runs). This result shows that the proposed TS algorithm is capable of exploiting the search space while generally avoiding premature convergence to local optima. Accordingly, we conclude that there is no need to accept worse solutions in this local search.

Figure 8: Convergence comparison of the TS and SA algorithms

Finally, in the last set of experiments, we conducted an analysis to provide insight into the reliability of the proposed TS algorithm's convergence. In particular, Figure 9 presents box plots of the standard deviation of the objective values for the one-scenario and SAA problems of the large-sized instances. Each data point represents the standard deviation of 30 runs (for each of the 90 large-sized instances). The average standard deviations for the one-scenario and SAA problems are 0.19 and 0.93, while the means of the objective values are 212.77 and 239.69, respectively. Accordingly, the average coefficients of variation, the ratios between the average standard deviation and the mean, are 9e-4 and 4e-3, which indicates that the proposed TS algorithm provides highly reliable results for both problems.

Figure 9: Convergence of the objective value with the TS algorithm for the one-scenario and SAA problems

6. Conclusion

This paper studied the mixed-model sequencing (MMS) problem with stochastic failures. To the best of our knowledge, this is the first study that considers stochastic failures of products in MMS. The products (vehicles) fail according to various characteristics and are then removed from the sequence, moving succeeding vehicles forward to close the gap. Vehicle failures may cause extra work overloads that could be prevented by generating a robust sequence at the outset. Accordingly, we formulated the problem as a two-stage stochastic program, and presented improvements for the second-stage problem. We employed the sample average approximation approach to tackle the exponential number of scenarios. We developed L-shaped decomposition-based algorithms to solve small-sized instances. The numerical experiments showed that the L-shaped algorithm outperforms the deterministic equivalent formulation, solved with an off-the-shelf solver, in terms of both solution quality and computational time. To solve industry-sized instances efficiently, we developed a greedy heuristic and a tabu search algorithm accelerated with problem-specific tabu rules. Numerical results showed that we can provide good quality solutions, with less than a 5% statistical optimality gap, to industry-sized instances in under ten minutes. The numerical experiments also indicated that we can generate good quality robust solutions by utilizing a sample of scenarios. In particular, we can reduce the work overload by more than 20%, for both small- and large-sized instances, by considering possible car failures.

6.1. Managerial Insights

Car manufacturers are facing several challenges due to the increasing ratio of EVs in production. EVs differ significantly from non-EVs, and these differences require specific treatment when creating the sequence for the mixed-model assembly line. One of the main challenges is the battery installation process. Unlike traditional vehicles, EVs have large and heavy batteries that must be installed in a specific order to avoid damaging the vehicle or the battery itself.
Accordingly, a huge processing time variation at the battery loading station arises from this difference, which requires special treatment to ensure that the assembly line can continue to produce high-quality vehicles efficiently. We have observed that consecutive EVs induce a significant amount of work overload, which generally requires line stoppage even with the help of utility workers. Planning the sequence by ruling out back-to-back EVs does not guarantee that consecutive EVs will never occur: the failure of vehicles disrupts the planned sequence, and the necessity of considering car failures during planning increases as the difference between product types grows.

In this study, we focused on generating robust schedules that take into account the possible deleterious effects resulting from the divergence between electric and non-electric vehicles. However, it is worth noting that our proposed solution methodologies are equally applicable to any feature that causes similar variations at specific work stations.

6.2. Future Research

One direction for future research is the reinsertion of failed vehicles back into the sequence. Even though the reinsertion process is conducted via a real-time decision, a robust approach may increase the efficiency of production. Another direction for future research is to include stochastic processing times in addition to stochastic product failures. This may potentially generate more robust schedules, particularly if a connection between failures and processing times is observed. Finally, there are similarities between MMS and some variants of the traveling salesman problem (TSP). Since the TSP is one of the most studied combinatorial optimization problems, state-of-the-art solution methodologies presented for the TSP may be adapted to MMS.

Acknowledgements

The authors acknowledge Clemson University for the generous allotment of compute time on the Palmetto cluster.

Appendix A. Tabu List for Local Search Algorithm

Assume that two positions t_1, t_2 with t_1 < t_2 are selected for an operator to be applied. The tabu movements for each operator are described below under two cases: the vehicle at position t_1 is 1) an EV, 2) not an EV.

Swap. 1) Position t_2 cannot be a neighbor of an EV, i.e., the vehicles at positions t_2 − 1 and t_2 + 1 cannot be EVs. 2) The vehicle at position t_2 cannot be an EV if position t_1 is a neighbor of an EV.

Forward Insertion. 1) The vehicles at positions t_2 and t_2 − 1 cannot be EVs. 2) At most one of the vehicles at positions t_1 − 1 and t_1 + 1 can be an EV.

Backward Insertion. 1) The vehicles at positions t_1 and t_1 − 1 cannot be EVs. 2) At most one of the vehicles at positions t_2 − 1 and t_2 + 1 can be an EV.

Inversion. 1) Position t_2 cannot be a left neighbor of an EV, i.e., the vehicle at position t_2 + 1 cannot be an EV. 2) If the vehicle at position t_1 is a right neighbor of an EV, then the vehicle at position t_2 cannot be an EV.

References

Agrawal, S., Tiwari, M., 2008. A collaborative ant colony algorithm to stochastic mixed-model u-shaped disassembly line balancing and sequencing problem. International Journal of Production Research 46, 1405–1429.

Akgündüz, O.S., Tunalı, S., 2010. An adaptive genetic algorithm approach for the mixed-model assembly line sequencing problem. International Journal of Production Research 48, 5157–5179.

Akgündüz, O.S., Tunalı, S., 2011.
A review of the current applications of genetic algorithms in mixed-model assembly line sequencing. International Journal of Production Research 49, 4483–4503.

Akpınar, S., Bayhan, G.M., Baykasoglu, A., 2013. Hybridizing ant colony optimization via genetic algorithm for mixed-model assembly line balancing problem with sequence dependent setup times between tasks. Applied Soft Computing 13, 574–589.

Bard, J.F., Dar-El, E., Shtub, A., 1992. An analytic framework for sequencing mixed model assembly lines. The International Journal of Production Research 30, 35–48.

Bolat, A., 2003. A mathematical model for selecting mixed models with due dates. International Journal of Production Research 41, 897–918.

Bolat, A., Yano, C.A., 1992. Scheduling algorithms to minimize utility work at a single station on a paced assembly line. Production Planning & Control 3, 393–405.

Boysen, N., Fliedner, M., Scholl, A., 2009. Sequencing mixed-model assembly lines: Survey, classification and model critique. European Journal of Operational Research 192, 349–373.

Boysen, N., Kiel, M., Scholl, A., 2011. Sequencing mixed-model assembly lines to minimise the number of work overload situations. International Journal of Production Research 49, 4735–4760.

Brammer, J., Lutz, B., Neumann, D., 2022. Stochastic mixed model sequencing with multiple stations using reinforcement learning and probability quantiles. OR Spectrum 44, 29–56.

Cano-Belmán, J., Ríos-Mercado, R.Z., Bautista, J., 2010. A scatter search based hyper-heuristic for sequencing a mixed-model assembly line. Journal of Heuristics 16, 749–770.

Celano, G., Fichera, S., Grasso, V., La Commare, U., Perrone, G., 1999. An evolutionary approach to multi-objective scheduling of mixed model assembly lines. Computers & Industrial Engineering 37, 69–73.

Codato, G., Fischetti, M., 2006. Combinatorial Benders' cuts for mixed-integer linear programming. Operations Research 54, 756–766.

Dar-El, E.M., 1978. Mixed-model assembly line sequencing problems. Omega 6, 313–323.

Dong, J., Zhang, L., Xiao, T., Mao, H., 2014. Balancing and sequencing of stochastic mixed-model assembly u-lines to minimise the expectation of work overload time. International Journal of Production Research 52, 7529–7548.

Fattahi, P., Salehi, M., 2009. Sequencing the mixed-model assembly line to minimize the total utility and idle costs with variable launching interval. The International Journal of Advanced Manufacturing Technology 45, 987–998.

Gottlieb, J., Puchta, M., Solnon, C., 2003. A study of greedy, local search, and ant colony optimization approaches for car sequencing problems, in: Workshops on Applications of Evolutionary Computation, Springer. pp. 246–257.

Hooker, J., 2011. Logic-based methods for optimization: combining optimization and constraint satisfaction. John Wiley & Sons.

Hottenrott, A., Waidner, L., Grunow, M., 2021. Robust car sequencing for automotive assembly. European Journal of Operational Research 291, 983–994.

Hyun, C.J., Kim, Y., Kim, Y.K., 1998. A genetic algorithm for multiple objective sequencing problems in mixed model assembly lines. Computers & Operations Research 25, 675–690.

Kilbridge, M., 1963. The assembly line model-mix sequencing problem, in: Proceedings of the 3rd International Conference on Operations Research.

Kim, S., Jeong, B., 2007. Product sequencing problem in mixed-model assembly line to minimize unfinished works. Computers & Industrial Engineering 53, 206–214.

Kim, Y.K., Hyun, C.J., Kim, Y., 1996.
Sequencing in mixed model assembly lines: a genetic algorithm approach. Computers & Operations Research 23, 1131–1145.

Kleywegt, A.J., Shapiro, A., Homem-de-Mello, T., 2002. The sample average approximation method for stochastic discrete optimization. SIAM Journal on Optimization 12, 479–502.

Kucukkoc, I., Zhang, D.Z., 2014. Simultaneous balancing and sequencing of mixed-model parallel two-sided assembly lines. International Journal of Production Research 52, 3665–3687.

Leu, Y.Y., Matheson, L.A., Rees, L.P., 1996. Sequencing mixed-model assembly lines with genetic algorithms. Computers & Industrial Engineering 30, 1027–1036.

Liu, Q., Wang, W.x., Zhu, K.r., Zhang, C.y., Rao, Y.q., 2014. Advanced scatter search approach and its application in a sequencing problem of mixed-model assembly lines in a case company. Engineering Optimization 46, 1485–1500.

Mak, W.K., Morton, D.P., Wood, R.K., 1999. Monte Carlo bounding techniques for determining solution quality in stochastic programs. Operations Research Letters 24, 47–56.

Homem-de-Mello, T., Bayraksan, G., 2014. Monte Carlo sampling-based methods for stochastic optimization. Surveys in Operations Research and Management Science 19, 56–85.

Mosadegh, H., Fatemi Ghomi, S., Süer, G.A., 2017. Heuristic approaches for mixed-model sequencing problem with stochastic processing times. International Journal of Production Research 55, 2857–2880.

Mosadegh, H., Ghomi, S.F., Süer, G.A., 2020. Stochastic mixed-model assembly line sequencing problem: Mathematical modeling and q-learning based simulated annealing hyper-heuristics. European Journal of Operational Research 282, 530–544.

Özcan, U., Kellegöz, T., Toklu, B., 2011. A genetic algorithm for the stochastic mixed-model u-line balancing and sequencing problem. International Journal of Production Research 49, 1605–1626.

Pil, F.K., Holweg, M., 2004. Linking product variety to order-fulfillment strategies. Interfaces 34, 394–403.

Ponnambalam, S., Aravindan, P., Rao, M.S., 2003. Genetic algorithms for sequencing problems in mixed model assembly lines. Computers & Industrial Engineering 45, 669–690.

Rahimi-Vahed, A., Mirghorbani, S., Rabbani, M., 2007a. A new particle swarm algorithm for a multi-objective mixed-model assembly line sequencing problem. Soft Computing 11, 997–1012.

Rahimi-Vahed, A.R., Rabbani, M., Tavakkoli-Moghaddam, R., Torabi, S.A., Jolai, F., 2007b. A multi-objective scatter search for a mixed-model assembly line sequencing problem. Advanced Engineering Informatics 21, 85–99.

Rahmaniani, R., Crainic, T.G., Gendreau, M., Rei, W., 2017. The Benders decomposition algorithm: A literature review. European Journal of Operational Research 259, 801–817.

Scholl, A., Klein, R., Domschke, W., 1998. Pattern based vocabulary building for effectively sequencing mixed-model assembly lines. Journal of Heuristics 4, 359–381.

Sikora, C.G.S., 2021. Benders' decomposition for the balancing of assembly lines with stochastic demand. European Journal of Operational Research 292, 108–124.

Solnon, C., 2000. Solving permutation constraint satisfaction problems with artificial ants, in: European Conference on Artificial Intelligence, Citeseer. pp. 118–122.

Thorsteinsson, E.S., 2001. Branch-and-check: A hybrid framework integrating mixed integer programming and constraint logic programming, in: International Conference on Principles and Practice of Constraint Programming, Springer. pp. 16–30.

Tsai, L.H., 1995. Mixed-model sequencing to minimize utility work and the risk of conveyor stoppage.
Management Science 41, 485–495.

Wei-qi, L., Qiong, L., Chao-yong, Z., Xin-yu, S., 2011. Hybrid particle swarm optimization for multi-objective sequencing problem in mixed model assembly lines. Computer Integrated Manufacturing System 17.

Yano, C.A., Rachamadugu, R., 1991. Sequencing to minimize work overload in assembly lines with product options. Management Science 37, 572–586.

Zandieh, M., Moradi, H., 2019. An imperialist competitive algorithm in mixed-model assembly line sequencing problem to minimise unfinished works. International Journal of Systems Science: Operations & Logistics 6, 179–192.

Zhang, B., Xu, L., Zhang, J., 2020. A multi-objective cellular genetic algorithm for energy-oriented balancing and sequencing problem of mixed-model assembly line. Journal of Cleaner Production 244. https://doi.org/10.1016/j.jclepro.2019.118845.

Zhao, X., Liu, J., Ohno, K., Kotani, S., 2007. Modeling and analysis of a mixed-model assembly line with stochastic operation times. Naval Research Logistics 54, 681–691.

Zhu, Q., Zhang, J., 2011. Ant colony optimisation with elitist ant for sequencing problem in a mixed model assembly line. International Journal of Production Research 49, 4605–4626.
189349
https://french.stackexchange.com/questions/177/is-there-a-non-religious-definition-of-catholique
adjectifs - Is there a "non-religious" definition of "catholique"? - French Language Stack Exchange

Is there a "non-religious" definition of "catholique"?

Asked 14 years, 1 month ago. Viewed 394 times. Score: 13.

In one of my favorite songs by Gilbert Becaud, "L'homme et la Musique", there is a passage I don't understand:

    Breve, nous ne sommes,
    Pas des amants catholiques.

I'm confused by the word "catholique". It's usually a religious reference, but it doesn't seem to be so in this case. Is there another common meaning that I'm missing?

Tags: adjectifs, sens, religion

Comment (Istao): Note that, according to paroles.zouker.com/gilbert-becaud/… or frmusique.ru/texts/b/becaud_gilbert/hommeetlamusique.htm, it is not "Breve, nous ne sommes" but "Bref, nous ne sommes". Who is right?

2 Answers

Answer (M'vy, score 20): Sometimes catholique is used to refer to something that is "the norm", what people usually do. "Ce gâteau ne me semble pas très catholique" means that you find the cake does not fit the standards for being called a cake: because it is weird looking, tastes strange, or whatever. This meaning comes, I guess, from France's long history of having a Catholic majority.

Comments:
- Tom Au: OK, so it means, "We're not the USUAL kind of lovers." That makes sense in this context. Another word I might use in this context is "orthodox."
- Joubarc: "Orthodoxe" is also used the same way, and I think that's the same in English; basically it means "strict" in this context. "Plus catholique que le pape" = as strict as one can be.
- Dave: Regarding "orthodox[e]": "strict" is the very definition of the word... and the reason it was used for the eponymous branch of Christianity... (to be fair, "catholic"/"catholique" also does literally mean "universal", but I doubt that is the reason the word is used to mean "in the norm").
- Borror0: This answer is misleading without mentioning that it's mostly used to refer to a moral norm. In this case, it isn't about "the usual kind of lovers" but rather "lovers who do naughty things." It's a reference to their sex life, not simply to how unorthodox they are.
- Benjol: A mildly equivalent English word could be 'kosher' (at least for the cake, not sure about the lovers!).

Answer (Brennan Vincent, score 8): Catholique can, by association with the church of that name, often mean "moral" or "correct", hence the common expression "Ce n'est pas catholique !" Thus the lyrics of this song probably mean that they are "immoral" (i.e., not chaste) lovers.

Comments:
- Tom Au: Except that they were "L'Homme, et La Musique." Therefore, "abnormal" is fine in this context; "unchaste" (or impudique) is not.
- Gilles: The sexual connotation of pas catholique does not exist everywhere. We had a short chat about this. It seems to be obvious in Quebec and to exist in some regions of France, but not all. I (mostly Parisian) had never heard of it until now.
- Knu: Rather "Ce n'est pas très catholique !" (without "très" it is ambiguous).
189350
https://en.wikipedia.org/wiki/Contour_line
Contour line

From Wikipedia, the free encyclopedia

Curve along which a 3-D surface is at equal elevation

This article is about lines of equal value in maps and diagrams. For more meanings of the word "contour", see Contour (disambiguation).

A contour line (also isoline, isopleth, isoquant or isarithm) of a function of two variables is a curve along which the function has a constant value, so that the curve joins points of equal value. It is a plane section of the three-dimensional graph of the function $f(x,y)$ parallel to the $(x,y)$-plane. More generally, a contour line for a function of two variables is a curve connecting points where the function has the same particular value.

In cartography, a contour line (often just called a "contour") joins points of equal elevation (height) above a given level, such as mean sea level. A contour map is a map illustrated with contour lines, for example a topographic map, which thus shows valleys and hills, and the steepness or gentleness of slopes. The contour interval of a contour map is the difference in elevation between successive contour lines.

The gradient of the function is always perpendicular to the contour lines. When the lines are close together the magnitude of the gradient is large: the variation is steep. A level set is a generalization of a contour line for functions of any number of variables.

Contour lines are curved, straight or a mixture of both lines on a map describing the intersection of a real or hypothetical surface with one or more horizontal planes. The configuration of these contours allows map readers to infer the relative gradient of a parameter and estimate that parameter at specific places. Contour lines may be either traced on a visible three-dimensional model of the surface, as when a photogrammetrist viewing a stereo-model plots elevation contours, or interpolated from the estimated surface elevations, as when a computer program threads contours through a network of observation points or area centroids. In the latter case, the method of interpolation affects the reliability of individual isolines and their portrayal of slope, pits and peaks.

History

The idea of lines that join points of equal value was rediscovered several times. The oldest known isobath (contour line of constant depth) is found on a map dated 1584 of the river Spaarne, near Haarlem, by Dutchman Pieter Bruinsz. In 1701, Edmond Halley used such lines (isogons) on a chart of magnetic variation. The Dutch engineer Nicholas Cruquius drew the bed of the river Merwede with lines of equal depth (isobaths) at intervals of 1 fathom in 1727, and Philippe Buache used them at 10-fathom intervals on a chart of the English Channel that was prepared in 1737 and published in 1752.
Such lines were used to describe a land surface (contour lines) in a map of the Duchy of Modena and Reggio by Domenico Vandelli in 1746, and they were studied theoretically by Ducarla in 1771, and Charles Hutton used them in the Schiehallion experiment. In 1791, a map of France by J. L. Dupain-Triel used contour lines at 20-metre intervals, hachures, spot-heights and a vertical section. In 1801, the chief of the French Corps of Engineers, Haxo, used contour lines at the larger scale of 1:500 on a plan of his projects for Rocca d'Anfo, now in northern Italy, under Napoleon. By around 1843, when the Ordnance Survey started to regularly record contour lines in Great Britain and Ireland, they were already in general use in European countries. Isobaths were not routinely used on nautical charts until those of Russia from 1834, and those of Britain from 1838.

As different uses of the technique were invented independently, cartographers began to recognize a common theme, and debated what to call these "lines of equal value" generally. The word isogram (from Ancient Greek ἴσος (isos) 'equal' and γράμμα (gramma) 'writing, drawing') was proposed by Francis Galton in 1889 for lines indicating equality of some physical condition or quantity, though isogram can also refer to a word without a repeated letter. As late as 1944, John K. Wright still preferred isogram, but it never attained wide usage. During the early 20th century, isopleth (πλῆθος, plethos, 'amount') was being used by 1911 in the United States, while isarithm (ἀριθμός, arithmos, 'number') had become common in Europe. Additional alternatives, including the Greek-English hybrid isoline and isometric line (μέτρον, metron, 'measure'), also emerged. Despite attempts to select a single standard, all of these alternatives have survived to the present.

When maps with contour lines became common, the idea spread to other applications. Perhaps the latest to develop are air quality and noise pollution contour maps, which first appeared in the United States in approximately 1970, largely as a result of national legislation requiring spatial delineation of these parameters.

Types

Contour lines are often given specific names beginning with "iso-" according to the nature of the variable being mapped, although in many usages the phrase "contour line" is most commonly used. Specific names are most common in meteorology, where multiple maps with different variables may be viewed simultaneously. The prefix "iso-" can be replaced with "isallo-" to specify a contour line connecting points where a variable changes at the same rate during a given time period.

An isogon (from Ancient Greek γωνία (gonia) 'angle') is a contour line for a variable which measures direction. In meteorology and in geomagnetics, the term isogon has specific meanings which are described below. An isocline (κλίνειν, klinein, 'to lean or slope') is a line joining points with equal slope. In population dynamics and in geomagnetics, the terms isocline and isoclinic line have specific meanings which are described below.

Equidistant points

A curve of equidistant points is a set of points all at the same distance from a given point, line, or polyline. In this case the function whose value is being held constant along a contour line is a distance function.

Isopleths
In 1944, John K. Wright proposed that the term isopleth be used for contour lines that depict a variable which cannot be measured at a point, but which instead must be calculated from data collected over an area, as opposed to isometric lines for variables that could be measured at a point; this distinction has since been followed generally. An example of an isopleth is population density, which can be calculated by dividing the population of a census district by the surface area of that district. Each calculated value is presumed to be the value of the variable at the centre of the area, and isopleths can then be drawn by a process of interpolation. The idea of an isopleth map can be compared with that of a choropleth map. In meteorology, the word isopleth is used for any type of contour line.

Meteorology

Meteorological contour lines are based on interpolation of the point data received from weather stations and weather satellites. Weather stations are seldom exactly positioned at a contour line (when they are, this indicates a measurement precisely equal to the value of the contour). Instead, lines are drawn to best approximate the locations of exact values, based on the scattered information points available. Meteorological contour maps may present collected data such as actual air pressure at a given time, or generalized data such as average pressure over a period of time, or forecast data such as predicted air pressure at some point in the future. Thermodynamic diagrams use multiple overlapping contour sets (including isobars and isotherms) to present a picture of the major thermodynamic factors in a weather system.

Barometric pressure

An isobar (from Ancient Greek βάρος (baros) 'weight') is a line of equal or constant pressure on a graph, plot, or map; an isopleth or contour line of pressure. More accurately, isobars are lines drawn on a map joining places of equal average atmospheric pressure reduced to sea level for a specified period of time. In meteorology, the barometric pressures shown are reduced to sea level, not the surface pressures at the map locations. The distribution of isobars is closely related to the magnitude and direction of the wind field, and can be used to predict future weather patterns. Isobars are commonly used in television weather reporting.

Isallobars are lines joining points of equal pressure change during a specific time interval. These can be divided into anallobars, lines joining points of equal pressure increase during a specific time interval, and katallobars, lines joining points of equal pressure decrease. In general, weather systems move along an axis joining high and low isallobaric centers. Isallobaric gradients are important components of the wind as they increase or decrease the geostrophic wind.

An isopycnal is a line of constant density. An isoheight or isohypse is a line of constant geopotential height on a constant-pressure surface chart; isohypses are used on upper-air charts in much the way isobars are used on surface maps.

Temperature and related subjects

An isotherm (from Ancient Greek θέρμη (thermē) 'heat') is a line that connects points on a map that have the same temperature. Therefore, all points through which an isotherm passes have the same or equal temperatures at the time indicated. An isotherm at 0 °C is called the freezing level.
The term lignes isothermes (or lignes d'égale chaleur) was coined by the Prussian geographer and naturalist Alexander von Humboldt, who as part of his research into the geographical distribution of plants published the first map of isotherms in Paris, in 1817. According to Thomas Hankins, the Scottish engineer William Playfair's graphical developments greatly influenced Alexander von Humboldt's invention of the isotherm. Humboldt later used his visualizations and analyses to contradict theories by Kant and other Enlightenment thinkers that non-Europeans were inferior due to their climate.

An isocheim is a line of equal mean winter temperature, and an isothere is a line of equal mean summer temperature. An isohel (ἥλιος, helios, 'Sun') is a line of equal or constant solar radiation. An isogeotherm is a line of equal temperature beneath the Earth's surface.

Rainfall and air moisture

An isohyet or isohyetal line (from Ancient Greek ὑετός (huetos) 'rain') is a line on a map joining points of equal rainfall in a given period. A map with isohyets is called an isohyetal map. An isohume is a line of constant relative humidity, while an isodrosotherm (from Ancient Greek δρόσος (drosos) 'dew' and θέρμη (therme) 'heat') is a line of equal or constant dew point. An isoneph is a line indicating equal cloud cover. An isochalaz is a line of constant frequency of hail storms, and an isobront is a line drawn through geographical points at which a given phase of thunderstorm activity occurred simultaneously. Snow cover is frequently shown as a contour-line map.

Wind

An isotach (from Ancient Greek ταχύς (tachus) 'fast') is a line joining points with constant wind speed. In meteorology, the term isogon refers to a line of constant wind direction.

Freeze and thaw

An isopectic line denotes equal dates of ice formation each winter, and an isotac denotes equal dates of thawing.

Physical geography and oceanography

Elevation and depth

Contours are one of several common methods used to denote elevation or altitude and depth on maps. From these contours, a sense of the general terrain can be determined. They are used at a variety of scales, from large-scale engineering drawings and architectural plans, through topographic maps and bathymetric charts, up to continental-scale maps. "Contour line" is the most common usage in cartography, but isobath for underwater depths on bathymetric maps and isohypse for elevations are also used.

In cartography, the contour interval is the elevation difference between adjacent contour lines. The contour interval should be the same over a single map. When calculated as a ratio against the map scale, a sense of the hilliness of the terrain can be derived.

Interpretation

There are several rules to note when interpreting terrain contour lines:

The rule of Vs: sharp-pointed vees usually are in stream valleys, with the drainage channel passing through the point of the vee, with the vee pointing upstream. This is a consequence of erosion.

The rule of Os: closed loops are normally uphill on the inside and downhill on the outside, and the innermost loop is the highest area. If a loop instead represents a depression, some maps note this by short lines called hachures which are perpendicular to the contour and point in the direction of the low. (The concept is similar to but distinct from hachures used in hachure maps.)

Spacing of contours: close contours indicate a steep slope; distant contours a shallow slope.
Two or more contour lines merging indicates a cliff. By counting the number of contours that cross a segment of a stream, the stream gradient can be approximated; for example, if five contours at a 10 m interval cross a 2 km stretch of stream, the drop is about 50 m and the average gradient is roughly 2.5%. Of course, to determine differences in elevation between two points, the contour interval, or distance in altitude between two adjacent contour lines, must be known, and this is normally stated in the map key. Usually contour intervals are consistent throughout a map, but there are exceptions. Sometimes intermediate contours are present in flatter areas; these can be dashed or dotted lines at half the noted contour interval. When contours are used with hypsometric tints on a small-scale map that includes mountains and flatter low-lying areas, it is common to have smaller intervals at lower elevations so that detail is shown in all areas. Conversely, for an island which consists of a plateau surrounded by steep cliffs, it is possible to use smaller intervals as the height increases.

Electrostatics

An isopotential map is a measure of electrostatic potential in space, often depicted in two dimensions with the electrostatic charges inducing that electric potential. The term equipotential line or isopotential line refers to a curve of constant electric potential. Whether crossing an equipotential line represents ascending or descending the potential is inferred from the labels on the charges. In three dimensions, equipotential surfaces may be depicted with a two-dimensional cross-section, showing equipotential lines at the intersection of the surfaces and the cross-section. The general mathematical term level set is often used to describe the full collection of points having a particular potential, especially in higher-dimensional space.

Magnetism

In the study of the Earth's magnetic field, the term isogon or isogonic line refers to a line of constant magnetic declination, the variation of magnetic north from geographic north. An agonic line is drawn through points of zero magnetic declination. An isoporic line refers to a line of constant annual variation of magnetic declination. An isoclinic line connects points of equal magnetic dip, and an aclinic line is the isoclinic line of magnetic dip zero. An isodynamic line (from δύναμις or dynamis meaning 'power') connects points with the same intensity of magnetic force.

Oceanography

Besides ocean depth, oceanographers use contours to describe diffuse variable phenomena much as meteorologists do with atmospheric phenomena. In particular, isobathytherms are lines showing depths of water with equal temperature, isohalines show lines of equal ocean salinity, and isopycnals are surfaces of equal water density.

Geology

Various geological data are rendered as contour maps in structural geology, sedimentology, stratigraphy and economic geology. Contour maps are used to show the below-ground surface of geologic strata, fault surfaces (especially low-angle thrust faults) and unconformities. Isopach maps use isopachs (lines of equal thickness) to illustrate variations in thickness of geologic units.

Environmental science

In discussing pollution, density maps can be very useful in indicating sources and areas of greatest contamination. Contour maps are especially useful for diffuse forms or scales of pollution. Acid precipitation is indicated on maps with isoplats.
Some of the most widespread applications of environmental science contour maps involve mapping of environmental noise (where lines of equal sound pressure level are denoted isobels), air pollution, soil contamination, thermal pollution and groundwater contamination. By contour planting and contour ploughing, the rate of water runoff and thus soil erosion can be substantially reduced; this is especially important in riparian zones.

Ecology

An isoflor is an isopleth contour connecting areas of comparable biological diversity. Usually, the variable is the number of species of a given genus or family that occurs in a region. Isoflor maps are thus used to show distribution patterns and trends such as centres of diversity.

Social sciences

In economics, contour lines can be used to describe features which vary quantitatively over space. An isochrone shows lines of equivalent drive time or travel time to a given location and is used in the generation of isochrone maps. An isotim shows equivalent transport costs from the source of a raw material, and an isodapane shows equivalent cost of travel time.

Contour lines are also used to display non-geographic information in economics. Indifference curves are used to show bundles of goods to which a person would assign equal utility. An isoquant is a curve of equal production quantity for alternative combinations of input usages, and an isocost curve shows alternative usages having equal production costs. In political science an analogous method is used in understanding coalitions (for example the diagram in Laver and Shepsle's work). In population dynamics, an isocline shows the set of population sizes at which the rate of change, or partial derivative, for one population in a pair of interacting populations is zero.

Statistics

In statistics, isodensity lines or isodensanes are lines that join points with the same value of a probability density. Isodensanes are used to display bivariate distributions. For example, for a bivariate elliptical distribution the isodensity lines are ellipses.

Thermodynamics, engineering, and other sciences

Various types of graphs in thermodynamics, engineering, and other sciences use isobars (constant pressure), isotherms (constant temperature), isochors (constant specific volume), or other types of isolines, even though these graphs are usually not related to maps. Such isolines are useful for representing more than two dimensions (or quantities) on two-dimensional graphs. Common examples in thermodynamics are some types of phase diagrams. Isoclines are used to solve ordinary differential equations.

In interpreting radar images, an isodop is a line of equal Doppler velocity, and an isoecho is a line of equal radar reflectivity. In the case of hybrid contours, the energies of hybrid orbitals and the energies of pure atomic orbitals are plotted; the graph obtained is called a hybrid contour.
Other phenomena

isochasm: equal occurrence of aurorae
isochor: volume
isodose: absorbed dose of radiation
isophene: biological events occurring with coincidence, such as plants flowering
isophote: illuminance
mobile telephony: mobile received power and cell coverage area

Algorithms

finding boundaries of level sets after image segmentation
Edge detection
Level-set method
Boundary tracing
Active contour model

(A minimal code sketch using this family of algorithms appears at the end of this section.)

Graphical design

For features specific to topography, see Terrain cartography § Contour lines, and Topographic map § Conventions.

To maximize readability of contour maps, there are several design choices available to the map creator, principally line weight, line color, line type and method of numerical marking.

Line weight is simply the darkness or thickness of the line used. This choice is made based upon the least intrusive form of contours that enable the reader to decipher the background information in the map itself. If there is little or no content on the base map, the contour lines may be drawn with relatively heavy thickness. Also, for many forms of contours such as topographic maps, it is common to vary the line weight and/or color, so that a different line characteristic occurs for certain numerical values. For example, on a topographic map the even hundred-foot elevations may be shown in a different weight from the twenty-foot intervals.

Line color is the choice of any number of pigments that suit the display. Sometimes a sheen or gloss is used as well as color to set the contour lines apart from the base map. Line colour can be varied to show other information.

Line type refers to whether the basic contour line is solid, dashed, dotted or broken in some other pattern to create the desired effect. Dotted or dashed lines are often used when the underlying base map conveys very important (or difficult to read) information. Broken line types are used when the location of the contour line is inferred.

Numerical marking is the manner of denoting the arithmetical values of contour lines. This can be done by placing numbers along some of the contour lines, typically using interpolation for intervening lines. Alternatively a map key can be produced associating the contours with their values.

If the contour lines are not numerically labeled and adjacent lines have the same style (with the same weight, color and type), then the direction of the gradient cannot be determined from the contour lines alone. However, if the contour lines cycle through three or more styles, then the direction of the gradient can be determined from the lines. The orientation of the numerical text labels is often used to indicate the direction of the slope.

Plan view versus profile view

See also: Topographic profile

Most commonly contour lines are drawn in plan view, or as an observer in space would view the Earth's surface: ordinary map form. However, some parameters can often be displayed in profile view showing a vertical profile of the parameter mapped. Some of the most common parameters mapped in profile are air pollutant concentrations and sound levels. In each of those cases it may be important to analyze air pollutant concentrations or sound levels at varying heights so as to determine the air quality or noise health effects on people at different elevations, for example, living on different floor levels of an urban apartment. In actuality, both plan and profile view contour maps are used in air pollution and noise pollution studies.
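To make the level-set idea concrete, here is the sketch promised above: a minimal Python example (ours, not from the article) that samples a function of two variables on a grid and draws its contour lines with Matplotlib, whose contour extraction is a marching-squares style algorithm of the kind listed under Algorithms.

```python
import numpy as np
import matplotlib.pyplot as plt

# Sample f(x, y) = x^2 + y^2 on a grid; each contour joins points of equal value.
x = np.linspace(-2.0, 2.0, 201)
y = np.linspace(-2.0, 2.0, 201)
X, Y = np.meshgrid(x, y)
Z = X**2 + Y**2

# A constant "contour interval" of 0.5, as on a topographic map.
levels = np.arange(0.5, 4.01, 0.5)
cs = plt.contour(X, Y, Z, levels=levels)
plt.clabel(cs, fmt="%.1f")            # numerical marking of the lines
plt.gca().set_aspect("equal")
plt.title("Contour lines of f(x, y) = x^2 + y^2")
plt.show()
```

For this radially symmetric function the contours are concentric circles; their tightening spacing away from the origin reflects the growing magnitude of the gradient, exactly as closely spaced elevation contours indicate steep terrain.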
Labeling contour maps

Labels are a critical component of elevation maps. A properly labeled contour map helps the reader to quickly interpret the shape of the terrain. If numbers are placed close to each other, it means that the terrain is steep. Labels should be placed along a slightly curved line "pointing" to the summit or nadir, from several directions if possible, making the visual identification of the summit or nadir easy. Contour labels can be oriented so a reader is facing uphill when reading the label.

Manual labeling of contour maps is a time-consuming process; however, there are a few software systems that can do the job automatically and in accordance with cartographic conventions, called automatic label placement.

See also

Aeronautical chart
Bathymetry
Dymaxion map
Fall line (topography)
Geological map
Marching squares
Planform
Tensor field
TERCOM

References

^ Courant, Richard; Robbins, Herbert; Stewart, Ian. What Is Mathematics?: An Elementary Approach to Ideas and Methods. New York: Oxford University Press, 1996. p. 344.
^ Hughes-Hallett, Deborah; McCallum, William G.; Gleason, Andrew M. (2013). Calculus: Single and Multivariable (6th ed.). John Wiley. ISBN 978-0470-88861-2.
^ "Definition of contour line". Dictionary.com. Retrieved 2022-04-04.
^ "Definition of CONTOUR MAP". Merriam-Webster. Retrieved 2022-04-04.
^ Tracy, John C. Plane Surveying; A Text-Book and Pocket Manual. New York: J. Wiley & Sons, 1907. p. 337.
^ Davis, John C. (1986). Statistics and Data Analysis in Geology. Wiley. ISBN 0-471-08079-9.
^ Morato-Moreno, Manuel (2017). "Orígenes de la representación topográfica del terreno en algunos mapas hispanoamericanos del s. XVI". Boletín de la Asociación de Geógrafos Españoles. doi:10.21138/bage.2414.
^ Thrower, N. J. W. Maps and Civilization: Cartography in Culture and Society. University of Chicago Press, 1972, revised 1996, p. 97; and Jardine, Lisa. Ingenious Pursuits: Building the Scientific Revolution. Little, Brown, and Company, 1999, p. 31.
^ Skelton, R. A. "Cartography". History of Technology, Oxford, vol. 6, pp. 612–614, 1958.
^ Colonel Berthaut, La Carte de France, vol. 1, p. 139, quoted by Close.
^ Hutton, C. "An account of the calculations made from the survey and measures taken at Schehallien, in order to ascertain the mean density of the Earth". Philosophical Transactions of the Royal Society of London, vol. 68, pp. 756–757.
^ Close, C. The Early Years of the Ordnance Survey, 1926, republished by David and Charles, 1969, ISBN 0-7153-4477-3, pp. 141–144.
^ Owen, T.; Pilbeam, E. Ordnance Survey: Map Makers to Britain since 1791. HMSO, 1992, ISBN 0-11-701507-5.
^ Galton, Francis (1889). "On the Principle and Methods of Assigning Marks for Bodily Efficiency". Nature. 40 (1044): 651. Bibcode:1889Natur..40..649.. doi:10.1038/040649a0. S2CID 3996216.
^ Wright, John K. (Apr 1930). "Isopleth as a Generic Term". Geographical Review. 20 (2): 341. JSTOR 208890.
^ Wright, John K. (Oct 1944). "The Terminology of Certain Map Symbols". Geographical Review. 34 (4): 653–654. Bibcode:1944GeoRv..34..653W. doi:10.2307/210035. JSTOR 210035.
^ Robinson, A. H. (1971). "The genealogy of the isopleth". Cartographic Journal. 8 (1): 49–53. Bibcode:1971CartJ...8...49R. doi:10.1179/caj.1971.8.1.49.
^ Slocum, T.; McMaster, R.; Kessler, F.; Howard, H. Thematic Cartography and Geographic Visualization, 2nd edition. Pearson, 2005, ISBN 0-13-035123-7, p. 272.
^ ArcGIS, Isopleth: Contours, 2013.
^ NOAA's National Weather Service, Glossary.
^ Hopkins, Edward J. (1996-06-10). "Surface Weather Analysis Chart". University of Wisconsin. Retrieved 2007-05-10.
^ World Meteorological Organisation. "Isallobar". Eumetcal. Archived from the original on 16 April 2014. Retrieved 12 April 2014.
^ World Meteorological Organisation. "Anallobar". Eumetcal. Archived from the original on 24 September 2015. Retrieved 12 April 2014.
^ World Meteorological Organisation. "Katallobar". Eumetcal. Archived from the original on 5 February 2008. Retrieved 12 April 2014.
^ "Forecasting weather system movement with pressure tendency". Chapter 13 – Weather Forecasting. Lyndon State College Atmospheric Sciences. Retrieved 12 April 2014.
^ DataStreme Atmosphere (2008-04-28). "Air Temperature Patterns". American Meteorological Society. Archived from the original on 2008-05-11. Retrieved 2010-02-07.
^ Daum, Andreas W. (2024). Alexander von Humboldt: A Concise Biography. Trans. Robert Savage. Princeton, N.J.: Princeton University Press. pp. 106–107. ISBN 978-0-691-24736-6.
^ Munzar, Jan (1967-09-01). "Alexander von Humboldt and His Isotherms". Weather. 22 (9): 360–363. Bibcode:1967Wthr...22..360M. doi:10.1002/j.1477-8696.1967.tb02989.x. ISSN 1477-8696.
^ "Blood, Dirt, and Nomograms: A Particular History of Graphs". Isis. 90 (1): 50–80. 1999. doi:10.1086/384241. ISSN 0021-1753.
^ Strobl, Michael (2021). "Alexander von Humboldt's Climatological Writings". German Life and Letters. 74 (3): 371–393. doi:10.1111/glal.12313. ISSN 0016-8777.
^ Leveson, David J. (2002). "Depression Contours – Getting Into and Out of a Hole". City University of New York.
^ Sark (Sercq), D Survey, Ministry of Defence, Series M 824, Sheet Sark, Edition 4 GSGS, 1965, OCLC 27636277. Scale 1:10,560. Contour intervals: 50 feet up to 200, 20 feet from 200 to 300, and 10 feet above 300.
^ "isoporic line". historicalcharts.noaa.gov. 1946. Archived from the original on 2015-07-21. Retrieved 2015-07-20.
^ "Isobel". www.sfu.ca. 2005-01-05. Retrieved 2010-04-25.
^ Specht, Raymond. Heathlands and Related Shrublands: Analytical Studies. Elsevier. pp. 219–220.
^ Laver, Michael; Shepsle, Kenneth A. (1996). Making and Breaking Governments.
^ Fernández, Antonio (2011). "A Generalized Regression Methodology for Bivariate Heteroscedastic Data" (PDF). Communications in Statistics – Theory and Methods. 40 (4): 598–621. doi:10.1080/03610920903444011. S2CID 55887263.
^ Imhof, E. "Die Anordnung der Namen in der Karte". Annuaire International de Cartographie II, Orell-Füssli Verlag, Zürich, 93–129, 1962.
^ Freeman, H. "Computer Name Placement", ch. 29 in Geographical Information Systems, 1, D. J. Maguire, M. F. Goodchild, and D. W. Rhind (eds.), John Wiley, New York, 1991, 449–460.

External links

Forthright's Phrontistery
189351
https://www.visualslope.com/Library/FDM-for-heat-transfering.pdf
Two-Dimensional Conduction: Finite-Difference Equations and Solutions
Chapter 4, Sections 4.4 and 4.5

Numerical methods
• Analytical solutions that allow for the determination of the exact temperature distribution are only available for limited ideal cases.
• Graphical solutions have been used to gain insight into complex heat transfer problems where analytical solutions are not available, but they have limited accuracy and are primarily used for two-dimensional problems.
• Advances in numerical computing now allow complex heat transfer problems to be solved rapidly on computers, i.e., "numerical techniques."
• Current numerical techniques include finite-difference analysis, finite element analysis (FEA), and finite-volume analysis.
• In general, these techniques are routinely used to solve problems in heat transfer, fluid dynamics, stress analysis, electrostatics and magnetics, etc.
• We will show the use of finite-difference analysis to solve conduction heat transfer problems.

Finite-difference analysis
• Numerical techniques result in an approximate solution; however, the error can be made very small.
• Properties (e.g., temperature) are determined at discrete points in the region of interest; these are referred to as nodal points or nodes.

Consider the finite-difference technique for 2-D conduction heat transfer:
• In this case each node represents the temperature of a point on the surface being considered.
• The temperature at the node represents the average temperature of that region of the surface.
• Algebraic expressions are used to define the relationship between adjacent nodes on the surface; usually the boundary conditions are specified.
• Increasing the number of nodes on the surface being considered increases the spatial resolution of the solution and can increase the accuracy of the numerical solution, but it also increases the number of calculations required to obtain a solution to the problem.

The Nodal Network and Finite-Difference Approximation
• The nodal network identifies discrete points at which the temperature is to be determined and uses an m,n notation to designate their location.
What is represented by the temperature determined at a nodal point, as for example, $T_{m,n}$? How is the accuracy of the solution affected by construction of the nodal network? What are the trade-offs between selection of a fine or a coarse mesh?

The Finite-Difference Method
Procedure:
• Represent the physical system by a nodal network, i.e., discretization of the problem.
• Use the energy balance method to obtain a finite-difference equation for each node of unknown temperature.
• Solve the resulting set of algebraic equations for the unknown nodal temperatures.
• Use the temperature field and Fourier's law to determine the heat transfer in the medium.
Finite-difference formulation of the differential equation
• Numerical methods are used for solving differential equations: the differential equation is replaced by algebraic equations.
• In the finite difference method, derivatives are replaced by differences. This is based on the premise that a reasonably accurate result can be obtained by replacing differential quantities by sufficiently small differences:

$$\frac{d f(x)}{dx} = \lim_{\Delta x \to 0} \frac{\Delta f}{\Delta x} \approx \frac{f(x+\Delta x) - f(x)}{\Delta x}$$

[Figure: the secant through $f(x)$ and $f(x+\Delta x)$ approximates the tangent line to $f(x)$ at $x$.]

Example: consider the 1-D steady-state heat conduction equation with internal heat generation,

$$\frac{\partial^2 T}{\partial x^2} + \frac{\dot q}{k} = 0.$$

For a point $m$ we approximate the first derivatives at the half-points $m - \tfrac{1}{2}\,\Delta x$ and $m + \tfrac{1}{2}\,\Delta x$ as

$$\left.\frac{dT}{dx}\right|_{m+\frac{1}{2}} \approx \frac{T_{m+1}-T_m}{\Delta x}, \qquad \left.\frac{dT}{dx}\right|_{m-\frac{1}{2}} \approx \frac{T_m - T_{m-1}}{\Delta x}.$$

The second derivative at $m$ is then approximated as

$$\left.\frac{d^2T}{dx^2}\right|_{m} \approx \frac{\left.\frac{dT}{dx}\right|_{m+\frac{1}{2}} - \left.\frac{dT}{dx}\right|_{m-\frac{1}{2}}}{\Delta x} = \frac{T_{m+1} + T_{m-1} - 2T_m}{(\Delta x)^2}.$$

Now the finite-difference approximation of the heat conduction equation is

$$\frac{T_{m+1} + T_{m-1} - 2T_m}{(\Delta x)^2} + \frac{\dot q}{k} = 0.$$

This is repeated for all the nodes in the region considered.

For a 2-D problem we construct a similar relationship in both the x- and y-directions at a point $(m,n)$:

$$\left.\frac{\partial^2 T}{\partial x^2}\right|_{m,n} \approx \frac{T_{m+1,n} + T_{m-1,n} - 2T_{m,n}}{(\Delta x)^2}, \qquad \left.\frac{\partial^2 T}{\partial y^2}\right|_{m,n} \approx \frac{T_{m,n+1} + T_{m,n-1} - 2T_{m,n}}{(\Delta y)^2}.$$

The finite-difference approximation of the 2-D heat conduction equation is then

$$\frac{T_{m+1,n} + T_{m-1,n} - 2T_{m,n}}{(\Delta x)^2} + \frac{T_{m,n+1} + T_{m,n-1} - 2T_{m,n}}{(\Delta y)^2} + \frac{\dot q}{k} = 0.$$

Once again this is repeated for all the nodes in the region considered. We could also derive a similar equation for the 3-D case.

If $\Delta x = \Delta y$, the finite-difference approximation of the 2-D heat conduction equation reduces to

$$T_{m-1,n} + T_{m+1,n} + T_{m,n-1} + T_{m,n+1} - 4T_{m,n} + \frac{\dot q\,(\Delta x)^2}{k} = 0,$$

which can be rearranged as

$$T_{m,n} = \frac{1}{4}\left[ T_{m-1,n} + T_{m+1,n} + T_{m,n-1} + T_{m,n+1} + \frac{\dot q\,(\Delta x)^2}{k} \right],$$

and, if there is no internal heat generation, to

$$T_{m,n} = \frac{1}{4}\left[ T_{m-1,n} + T_{m+1,n} + T_{m,n-1} + T_{m,n+1} \right],$$

which is just the average of the surrounding nodes' temperatures!

Consider this simple case: a square region with four interior nodes, $T_1$ (upper left), $T_2$ (upper right), $T_3$ (lower left) and $T_4$ (lower right), with boundary temperatures of 100 along the top, 50 along the left side, 200 along the right side and 300 along the bottom. The interior-node equations are

$$T_1 = \frac{1}{4}\left[T_3 + T_2 + 100 + 50\right], \qquad T_2 = \frac{1}{4}\left[T_1 + T_4 + 100 + 200\right],$$
$$T_3 = \frac{1}{4}\left[T_1 + T_4 + 300 + 50\right], \qquad T_4 = \frac{1}{4}\left[T_3 + T_2 + 300 + 200\right].$$

What if the boundary conditions are different?
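The averaging recipe above is easy to check numerically. The following short Python sketch (added here as an illustration; it is not part of the original notes) applies Gauss-Seidel style sweeps to the four-node example, repeatedly replacing each unknown temperature by the average of its four neighbours until the values stop changing.

```python
# Iterative solution of the four-node example above.
# Interior nodes: T1 (upper left), T2 (upper right), T3 (lower left), T4 (lower right).
# Boundary temperatures: 100 (top), 50 (left), 200 (right), 300 (bottom); no generation.

def solve_four_nodes(tol=1e-6, max_iter=1000):
    T1 = T2 = T3 = T4 = 0.0                      # arbitrary initial guess
    for _ in range(max_iter):
        T1_new = 0.25 * (T3 + T2 + 100.0 + 50.0)
        T2_new = 0.25 * (T1_new + T4 + 100.0 + 200.0)   # latest T1 reused immediately
        T3_new = 0.25 * (T1_new + T4 + 300.0 + 50.0)
        T4_new = 0.25 * (T3_new + T2_new + 300.0 + 200.0)
        change = max(abs(T1_new - T1), abs(T2_new - T2),
                     abs(T3_new - T3), abs(T4_new - T4))
        T1, T2, T3, T4 = T1_new, T2_new, T3_new, T4_new
        if change < tol:                         # stop when successive sweeps agree
            break
    return T1, T2, T3, T4

print(solve_four_nodes())   # converges to about (118.75, 156.25, 168.75, 206.25)
```

Because updated values are reused within each sweep, this is the Gauss-Seidel scheme that is formalized later in these notes.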
Derivation of the Finite-Difference Equations: The Energy Balance Method
• As a convenience that eliminates the need to predetermine the direction of heat flow, assume all heat flows are into the nodal region of interest, and express all heat rates accordingly. Hence, the energy balance becomes

$$\dot E_{in} + \dot E_g = 0 \qquad (4.30)$$

• Consider application to an interior nodal point (one that exchanges heat by conduction with four equidistant nodal points):

$$\sum_{i=1}^{4} q_{(i)\to(m,n)} + \dot q\,(\Delta x \cdot \Delta y \cdot 1) = 0 \qquad (4.31)$$

where, for example,

$$q_{(m-1,n)\to(m,n)} = k\,(\Delta y \cdot 1)\,\frac{T_{m-1,n} - T_{m,n}}{\Delta x}.$$

Is it possible for all heat flows to be into the m,n nodal region? What feature of the analysis insures a correct form of the energy balance equation despite the assumption of conditions that are not realizable?
• A summary of finite-difference equations for common nodal regions is provided in Table 4.2.

Consider an external corner with convection heat transfer:

$$q_{(m-1,n)\to(m,n)} + q_{(m,n-1)\to(m,n)} + q_{(\infty)\to(m,n)} = 0$$

$$k\left(\frac{\Delta y}{2}\cdot 1\right)\frac{T_{m-1,n}-T_{m,n}}{\Delta x} + k\left(\frac{\Delta x}{2}\cdot 1\right)\frac{T_{m,n-1}-T_{m,n}}{\Delta y} + h\left(\frac{\Delta x}{2}\cdot 1\right)(T_\infty - T_{m,n}) + h\left(\frac{\Delta y}{2}\cdot 1\right)(T_\infty - T_{m,n}) = 0$$

or, with $\Delta x = \Delta y$,

$$T_{m-1,n} + T_{m,n-1} + 2\,\frac{h\,\Delta x}{k}\,T_\infty - 2\left(\frac{h\,\Delta x}{k} + 1\right)T_{m,n} = 0 \qquad (4.43)$$

• Note the potential utility of using thermal resistance concepts to express rate equations, e.g., conduction between adjoining dissimilar materials with an interfacial contact resistance:

$$q_{(m,n-1)\to(m,n)} = \frac{T_{m,n-1} - T_{m,n}}{R_{tot}}, \qquad R_{tot} = \frac{\Delta y/2}{k_A\,(\Delta x \cdot 1)} + \frac{R''_{t,c}}{\Delta x \cdot 1} + \frac{\Delta y/2}{k_B\,(\Delta x \cdot 1)} \qquad (4.46)$$

Solution Methods
• Matrix inversion: expression of the system of N finite-difference equations for N unknown nodal temperatures as

$$[A][T] = [C] \qquad (4.48)$$

where $[A]$ is the (N x N) coefficient matrix, $[T]$ the solution vector $(T_1, T_2, \dots, T_N)$, and $[C]$ the right-hand-side vector of constants $(C_1, C_2, \dots, C_N)$. The solution is

$$[T] = [A]^{-1}[C] \qquad (4.49)$$

where $[A]^{-1}$ is the inverse of the coefficient matrix.
• Gauss-Seidel iteration: each finite-difference equation is written in explicit form, such that its unknown nodal temperature appears alone on the left-hand side:

$$T_i^{(k)} = \frac{C_i}{a_{ii}} - \sum_{j=1}^{i-1} \frac{a_{ij}}{a_{ii}}\,T_j^{(k)} - \sum_{j=i+1}^{N} \frac{a_{ij}}{a_{ii}}\,T_j^{(k-1)} \qquad (4.51)$$

where $i = 1, 2, \dots, N$ and k is the level of iteration. Iteration proceeds until satisfactory convergence is achieved for all nodes:

$$\left| T_i^{(k)} - T_i^{(k-1)} \right| \le \varepsilon$$

• What measures may be taken to insure that the results of a finite-difference solution provide an accurate prediction of the temperature field?

Problem 4.39: Finite-difference equations for (a) a nodal point on a diagonal surface and (b) the tip of a cutting tool.

Schematic: (a) diagonal surface; (b) cutting tool.

ASSUMPTIONS: (1) Steady-state, 2-D conduction, (2) Constant properties.

ANALYSIS: (a) The control volume about node m,n is triangular with sides $\Delta x$ and $\Delta y$ and a diagonal (surface) of length $\sqrt{2}\,\Delta x$. The heat rates associated with the control volume are due to conduction, $q_1$ and $q_2$, and to convection, $q_c$. An energy balance for a unit depth normal to the page yields $\dot E_{in} = 0$:

$$q_1 + q_2 + q_c = 0$$

$$k(\Delta x \cdot 1)\,\frac{T_{m,n-1}-T_{m,n}}{\Delta y} + k(\Delta y \cdot 1)\,\frac{T_{m+1,n}-T_{m,n}}{\Delta x} + h\left(\sqrt{2}\,\Delta x \cdot 1\right)(T_\infty - T_{m,n}) = 0.$$

With $\Delta x = \Delta y$, it follows that

$$T_{m,n-1} + T_{m+1,n} + \sqrt{2}\,\frac{h\,\Delta x}{k}\,T_\infty - \left(2 + \sqrt{2}\,\frac{h\,\Delta x}{k}\right)T_{m,n} = 0.$$

(b) The control volume about node m,n is triangular with sides $\Delta x/2$ and $\Delta y/2$ and a lower diagonal surface of length $\sqrt{2}\,(\Delta x/2)$. The heat rates associated with the control volume are due to the uniform heat flux, $q_a$, conduction, $q_b$, and convection, $q_c$. An energy balance for a unit depth yields $\dot E_{in} = 0$:

$$q_a + q_b + q_c = 0$$

$$q''_o\left(\frac{\Delta x}{2}\cdot 1\right) + k\left(\frac{\Delta y}{2}\cdot 1\right)\frac{T_{m+1,n}-T_{m,n}}{\Delta x} + h\left(\sqrt{2}\,\frac{\Delta x}{2}\cdot 1\right)(T_\infty - T_{m,n}) = 0$$

or, with $\Delta x = \Delta y$,

$$T_{m+1,n} + \sqrt{2}\,\frac{h\,\Delta x}{k}\,T_\infty + \frac{q''_o\,\Delta x}{k} - \left(1 + \sqrt{2}\,\frac{h\,\Delta x}{k}\right)T_{m,n} = 0.$$

Problem 4.76: Analysis of a cold plate used to thermally control the IBM multi-chip thermal conduction module.

Features:
• Heat dissipated in the chips is transferred by conduction through spring-loaded aluminum pistons to an aluminum cold plate.
• Nominal operating conditions may be assumed to provide a uniformly distributed heat flux of $q''_o = 10^5\ \mathrm{W/m^2}$ at the base of the cold plate.
• Heat is transferred from the cold plate by water flowing through channels in the cold plate.

Find: (a) The cold plate temperature distribution for the prescribed conditions. (b) Options for operating at larger power levels while remaining within a maximum cold plate temperature of 40°C.

ASSUMPTIONS: (1) Steady-state conditions, (2) Two-dimensional conduction, (3) Constant properties.

ANALYSIS: Finite-difference equations must be obtained for each of the 28 nodes. Applying the energy balance method to regions 1 and 5, which are similar, it follows that

Node 1: $(\Delta y/\Delta x)\,T_2 + (\Delta x/\Delta y)\,T_6 - \left[(\Delta y/\Delta x) + (\Delta x/\Delta y)\right]T_1 = 0$

Node 5: $(\Delta y/\Delta x)\,T_4 + (\Delta x/\Delta y)\,T_{10} - \left[(\Delta y/\Delta x) + (\Delta x/\Delta y)\right]T_5 = 0$

Nodal regions 2, 3 and 4 are similar, and the energy balance method yields a finite-difference equation of the form

Nodes 2, 3, 4: $(\Delta y/2\Delta x)\,(T_{m-1,n} + T_{m+1,n}) + (\Delta x/\Delta y)\,T_{m,n-1} - \left[(\Delta y/\Delta x) + (\Delta x/\Delta y)\right]T_{m,n} = 0$

Energy balances applied to the remaining combinations of similar nodes yield the following finite-difference equations.

Nodes 6, 14:
$(\Delta x/\Delta y)\,T_1 + (\Delta y/\Delta x)\,T_7 - \left[(\Delta x/\Delta y) + (\Delta y/\Delta x) + (h\Delta x/k)\right]T_6 = -(h\Delta x/k)\,T_\infty$
$(\Delta x/\Delta y)\,T_{19} + (\Delta y/\Delta x)\,T_{15} - \left[(\Delta x/\Delta y) + (\Delta y/\Delta x) + (h\Delta x/k)\right]T_{14} = -(h\Delta x/k)\,T_\infty$

Nodes 7, 15:
$(\Delta y/2\Delta x)\,(T_6 + T_8) + (\Delta x/\Delta y)\,T_2 - \left[(\Delta y/\Delta x) + (\Delta x/\Delta y) + (h\Delta x/k)\right]T_7 = -(h\Delta x/k)\,T_\infty$
$(\Delta y/2\Delta x)\,(T_{14} + T_{16}) + (\Delta x/\Delta y)\,T_{20} - \left[(\Delta y/\Delta x) + (\Delta x/\Delta y) + (h\Delta x/k)\right]T_{15} = -(h\Delta x/k)\,T_\infty$
Nodes 8, 16:
$2(\Delta y/\Delta x)\,T_7 + (\Delta y/\Delta x)\,T_9 + (\Delta x/\Delta y)\,T_{11} + 2(\Delta x/\Delta y)\,T_3 - \left[3(\Delta y/\Delta x) + 3(\Delta x/\Delta y) + (h/k)(\Delta x + \Delta y)\right]T_8 = -(h/k)(\Delta x + \Delta y)\,T_\infty$
$2(\Delta y/\Delta x)\,T_{15} + (\Delta y/\Delta x)\,T_{17} + (\Delta x/\Delta y)\,T_{11} + 2(\Delta x/\Delta y)\,T_{21} - \left[3(\Delta y/\Delta x) + 3(\Delta x/\Delta y) + (h/k)(\Delta x + \Delta y)\right]T_{16} = -(h/k)(\Delta x + \Delta y)\,T_\infty$

Node 11:
$(\Delta x/\Delta y)\,T_8 + (\Delta x/\Delta y)\,T_{16} + 2(\Delta y/\Delta x)\,T_{12} - 2\left[(\Delta x/\Delta y) + (\Delta y/\Delta x) + (h\Delta y/k)\right]T_{11} = -(2h\Delta y/k)\,T_\infty$

Nodes 9, 12, 17, 20, 21, 22:
$(\Delta y/\Delta x)\,T_{m-1,n} + (\Delta y/\Delta x)\,T_{m+1,n} + (\Delta x/\Delta y)\,T_{m,n+1} + (\Delta x/\Delta y)\,T_{m,n-1} - 2\left[(\Delta x/\Delta y) + (\Delta y/\Delta x)\right]T_{m,n} = 0$

Nodes 10, 13, 18, 23:
$(\Delta x/\Delta y)\,T_{m,n+1} + (\Delta x/\Delta y)\,T_{m,n-1} + 2(\Delta y/\Delta x)\,T_{m-1,n} - 2\left[(\Delta x/\Delta y) + (\Delta y/\Delta x)\right]T_{m,n} = 0$

Node 19:
$(\Delta x/\Delta y)\,T_{14} + (\Delta x/\Delta y)\,T_{24} + 2(\Delta y/\Delta x)\,T_{20} - 2\left[(\Delta x/\Delta y) + (\Delta y/\Delta x)\right]T_{19} = 0$

Nodes 24, 28:
$(\Delta x/\Delta y)\,T_{19} + (\Delta y/\Delta x)\,T_{25} - \left[(\Delta x/\Delta y) + (\Delta y/\Delta x)\right]T_{24} = -\,q''_o\,\Delta x/k$
$(\Delta x/\Delta y)\,T_{23} + (\Delta y/\Delta x)\,T_{27} - \left[(\Delta x/\Delta y) + (\Delta y/\Delta x)\right]T_{28} = -\,q''_o\,\Delta x/k$

Nodes 25, 26, 27:
$(\Delta y/\Delta x)\,T_{m-1,n} + (\Delta y/\Delta x)\,T_{m+1,n} + 2(\Delta x/\Delta y)\,T_{m,n+1} - 2\left[(\Delta x/\Delta y) + (\Delta y/\Delta x)\right]T_{m,n} = -\,2q''_o\,\Delta x/k$

Evaluating the coefficients and solving the equations simultaneously, the steady-state temperature distribution (°C), tabulated according to the node locations, is:

Nodes 1–5:   23.77  23.91  24.27  24.61  24.74
Nodes 6–10:  23.41  23.62  24.31  24.89  25.07
Nodes 11–13: 25.70  26.18  26.33
Nodes 14–18: 28.90  28.76  28.26  28.32  28.35
Nodes 19–23: 30.72  30.67  30.57  30.53  30.52
Nodes 24–28: 32.77  32.74  32.69  32.66  32.65

(b) For the prescribed conditions, the maximum allowable temperature (T24 = 40°C) is reached when $q''_o$ = 1.407 × 10⁵ W/m² (14.07 W/cm²). Options for extending this limit could include use of a copper cold plate (k ≈ 400 W/m·K) and/or increasing the convection coefficient associated with the coolant. With k = 400 W/m·K, a value of $q''_o$ = 17.37 W/cm² may be maintained. With k = 400 W/m·K and h = 10,000 W/m²·K (a practical upper limit), $q''_o$ = 28.65 W/cm². Additional, albeit small, improvements may be realized by relocating the coolant channels closer to the base of the cold plate.
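For a small system like the four-node example introduced earlier, the matrix route of equations (4.48) and (4.49) is just as easy to script. The sketch below (again an added illustration, not part of the problem set) assembles [A][T] = [C] by moving the unknowns of each node equation to the left-hand side and solves the system directly with NumPy; the result matches the iterative solution.

```python
import numpy as np

# [A][T] = [C] for the four interior nodes of the earlier example (eq. 4.48).
# Each row is a rearranged node equation, e.g. 4*T1 - T2 - T3 = 100 + 50 = 150.
A = np.array([[ 4.0, -1.0, -1.0,  0.0],
              [-1.0,  4.0,  0.0, -1.0],
              [-1.0,  0.0,  4.0, -1.0],
              [ 0.0, -1.0, -1.0,  4.0]])
C = np.array([150.0, 300.0, 350.0, 500.0])   # summed boundary temperatures per node

T = np.linalg.solve(A, C)    # numerically preferable to forming [A]^-1 [C] (eq. 4.49)
print(T)                     # [118.75 156.25 168.75 206.25]
```

The 28-node cold plate above is solved the same way, only with a 28 x 28 coefficient matrix built from the node equations listed in the analysis.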
189352
https://keysight.zinfi.net/concierge/OEMs/keysight/wwwcontent/Attachments/pdf/powerappnote/5990-8888.pdf
APPLICATION NOTE

10 Tips to Enhance DC Power Testing and Analysis

Introduction

A DC power supply is an integral part of any good test system. The capability to deliver clean and accurate power to your Device Under Test (DUT) removes doubts and ensures you get the right results. Our practical tips will enable this and let you get more out of your power supply. If you ever need to get a new power supply, you can count on these tips to help you choose the right one. Remember, more power and features do not mean a better power supply. It is about how you use your power supply.

Don't worry about your power supply. Let us do that for you. We want you to focus on what's important to you. We hope you enjoy our tips.

Contents
1. Program Your Power Supply Correctly to Operate in Constant Voltage or Constant Current Mode
2. Use Remote Sense to Regulate Voltage at Your Load
3. Use Your Power Supply to Measure DUT Current
4. Connect Power Supply Outputs in Series or Parallel for More Power
5. Minimize Noise from Your Power Supply to Your DUT
6. Safeguard Your DUT Using Built-in Power Supply Protection Features
7. Use Output Relays to Physically Disconnect Your DUT
8. Capture Dynamic Waveforms Using a Power Supply's Built-in Digitizer
9. Create Time-Varying Voltages Using Power Supply List Mode
10. Control Instruments, Automate Tests, and Perform Analysis with Software
Bonus Tip: Rack-Mounting Your Power Supply
Resources

1. Program Your Power Supply Correctly to Operate in Constant Voltage or Constant Current Mode

In most circumstances, the output of a power supply operates in either constant voltage (CV) or constant current (CC) mode, depending on the voltage setting, current limit setting, and load resistance. However, some unusual circumstances will cause the power supply to enter unregulated (UNR) mode. Understanding these three modes will make it easier to program your power supply correctly.

Constant voltage

A power supply will operate in CV mode, provided the load does not require more current than the current limit setting. Based on Ohm's law, V = I x R, maintaining a constant voltage while changing the load resistance requires the current to increase or decrease. As long as the current draw Iout = Vs / RL is less than the current limit setting, the power supply regulates the output at the voltage setting. In Figure 1, the power supply will operate on the horizontal line Vs with Iout = Vs / RL.

Figure 1. Power supply output characteristic. (The CV operating line at Vout = VS applies for RL > RC; the CC operating line at Iout = IS applies for RL < RC. RL = load resistance; RC = VS / IS = critical (crossover) resistance; VS = voltage setting; IS = current setting.)

Constant current

If the load resistance decreases, such as when a device under test (DUT) component fails, and the load resistance, RL, is less than RC, where RC is the ratio of the power supply voltage setting to the current limit setting, the power supply will regulate the current instead. Again, Ohm's law dictates a change in voltage if the current stays constant at the current limit setting. This is the CC mode. In Figure 1, the power supply will operate on the vertical line IS with Vout = IS x RL.

Unregulated state

If a power supply cannot regulate its output voltage or output current, the output will become unregulated and indicate UNR mode. Neither the voltage nor the current will be at the corresponding set point, and the values at which they settle are unpredictable. While UNR mode can occur for various reasons, it is uncommon. Possible causes of UNR include the following:
• The power supply has an internal fault.
• The AC input line voltage is below the specified range.
• The load resistance is RC, the value at which the output will cross over from CV to CC or CC to CV (see Figure 1).
• Another power source is connected across your power supply output, such as when you set up outputs in parallel.
• The output is transitioning from CV to CC or CC to CV. This transition can cause a momentary UNR.
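The crossover logic of Tip 1 fits in a few lines of code. The Python sketch below (our illustration, not from this note) predicts the operating mode from the voltage setting, current limit, and load resistance, using the critical resistance RC = VS / IS from Figure 1.

```python
def operating_mode(v_set, i_limit, r_load):
    """Predict CV/CC operation from the Figure 1 output characteristic."""
    r_crit = v_set / i_limit            # critical (crossover) resistance RC = VS / IS
    if r_load > r_crit:
        # Load draws Iout = VS / RL, which is below the limit: voltage is regulated.
        return ("CV", v_set, v_set / r_load)
    if r_load < r_crit:
        # Output regulates at the current limit; voltage drops to IS * RL.
        return ("CC", i_limit * r_load, i_limit)
    return ("at crossover; a momentary UNR indication is possible", v_set, i_limit)

print(operating_mode(5.0, 2.0, 10.0))   # ('CV', 5.0, 0.5)
print(operating_mode(5.0, 2.0, 1.0))    # ('CC', 2.0, 2.0)
```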
2. Use Remote Sense to Regulate Voltage at Your Load

Ideally, lead connections from your power supply to your load have no resistance. In reality, lead resistance increases with lead length and wire gauge. As a result, the voltage at the load may decrease when a supply delivers current through the wire. To compensate, use remote sensing to correct for these voltage drops.

Typically, a power supply ships from the factory with the sense leads connected locally at the output terminals. However, for setups with long load leads or complex setups with relays and connectors, the voltage at the output terminals will not accurately represent the voltage at the load (Figure 2).

Figure 2. The effects of 6 feet of 14 AWG leads with sense leads connected to the output terminals. With the supply programmed for 5 V and delivering 10 A through 0.015 Ω of lead resistance per lead, a 0.3 V drop develops over the leads (0.15 V per lead), leaving 4.7 V at the load.

Depending on the gauge and length of the wire, the resistivity of your load connections could cause a much lower voltage at your load than you want. High-current situations, for example, will invariably lead to significant voltage drops even with short load leads. Consider the resistances of different gauges of copper wire in Table 1.

Table 1. Resistance in mΩ per foot for different wire gauges

Wire gauge (AWG)    Resistance (mΩ/ft)
22                  16.1
20                  10.2
18                  6.39
16                  4.02
14                  2.53
12                  1.59
10                  0.999

As a general rule, for every three-gauge increase in your copper wire, the resistance doubles. Since you must select the gauge wire to satisfy the current requirements of the load, remote sense at the load will improve voltage regulation without shortening lead length or decreasing wire gauge.
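A quick calculation shows how much regulation you stand to lose without remote sensing. The sketch below (our illustration; the per-foot resistances come straight from Table 1) estimates the voltage actually delivered to the load for a given gauge, lead length, and current, and reproduces the 0.3 V drop of Figure 2.

```python
# Copper wire resistance per foot, from Table 1 (milliohms per foot).
MILLIOHM_PER_FT = {22: 16.1, 20: 10.2, 18: 6.39, 16: 4.02,
                   14: 2.53, 12: 1.59, 10: 0.999}

def load_voltage(v_set, i_out, awg, feet_per_lead):
    """Voltage at the load with the sense leads connected locally (no remote sense)."""
    r_lead = MILLIOHM_PER_FT[awg] * 1e-3 * feet_per_lead   # ohms per lead
    drop = 2.0 * i_out * r_lead                            # both + and - leads drop
    return v_set - drop

# The Figure 2 case: 5 V setting, 10 A load, 6 feet of 14 AWG per lead.
print(load_voltage(5.0, 10.0, awg=14, feet_per_lead=6.0))  # about 4.70 V at the load
```

With remote sensing enabled, the feedback loop raises the output terminals by the same drop (to about 5.3 V in Figure 3) so that the load itself sees the programmed 5 V.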
When you connect the remote sense terminals to the load, the internal feedback amplifier sees the voltage directly at the load rather than at the output terminals. Since the control loop senses the voltage directly at the load, the supply will keep the load voltage constant, regardless of voltage drops caused by load lead gauge, load lead length, output relays, or connectors. Remember the following when you use remote sense:
• Use a two-wire shielded twisted-pair cable for your sense leads.
• Connect the sense lead cable's shield to ground on only one end of the cable.
• Do not twist or bundle sense leads together with load leads.
• Prevent an open circuit at the sense terminals, which are part of the output's feedback path.
• Keysight uses internal sense protect resistors. These resistors prevent the output voltage from rising more than a few percentage points if the sense leads inadvertently open.
• Most supplies can compensate only for a maximum load lead drop of a few volts.

To implement remote sense (Figure 3), disconnect the sense terminal connections from the main outputs. Then connect each sense terminal to the load contact of the proper polarity. Finally, set the power supply to remote sense mode or four-wire mode, if necessary.

[Figure 3. Using remote sense to compensate for load lead voltage drop: with the sense leads moved to the load, the supply raises its terminal voltage to 5.3 V so the load sees the programmed 5.0 V despite the 0.015 Ω lead resistances.]

3. Use Your Power Supply to Measure DUT Current
You can obtain an accurate DUT current measurement with an ammeter, a current shunt, or the built-in readback on your power supply. Ultimately, you should select a method after considering the advantages and disadvantages of each. Often, the current readback on your power supply can provide the measurement accuracy you need.

Ammeter
A common way to measure DUT current is to use a bench digital multimeter (DMM) set in ammeter mode. While an ammeter has the benefit of a specified accuracy, you must break the circuit to insert the ammeter. A DMM also limits the maximum current you can measure, typically to several amps.

External current shunt / DMM
You can also make current measurements with shunts. With a current shunt, you can conveniently select the most appropriate shunt resistor to match your current range. Your accuracy is based on the DMM's voltage measurement accuracy and the precision of the shunt. While this method can produce highly accurate results, certain errors can adversely affect your measurements. Commonly overlooked complications include thermal electromotive force, which occurs when dissimilar metals cause thermocouple voltages to develop; shunt miscalibration; and self-heating effects, which occur when higher temperature from current flow causes the shunt resistance to change. In addition to these concerns, installing a current shunt requires breaking your circuit to connect the shunt in series. A current shunt installed in a rack-mount system may require complex connections involving relays and switches.

Built-in current readback
You can avoid the difficulties involved with connecting current shunts by using a power supply's built-in readback.
Current readback on a power supply uses an internal shunt selected to complement the output rating of the supply. You do not need to disconnect the DUT or connect a DMM. Consider the level of measurement accuracy you can expect with a high-quality power supply (Table 2).

Table 2. Relative accuracy of power supply current readback

  Output current level   Typical accuracy
  100% of rated output   0.1% to 0.5%
  10% of rated output    0.5% to 1%
  1% of rated output     near 10%

Power supply measurement specifications account for the errors that affect an external shunt. Therefore, your power supply readback may already be accurate enough for most current measurement applications, particularly for currents between 10% and 100% of the rated output current of the supply. Built-in current readback offers many benefits, including the following:
• Reduction in connection equipment – no need for relays, switching, and wiring
• Simplicity of use
• Power supply provides readings directly in amps
• Circuit disconnects not required
• Specified accuracy – accuracy values already account for shunt errors
• Synchronized measurements – readback measurements can be triggered to start with other power-related events

4. Connect Power Supply Outputs in Series or Parallel for More Power
You can connect two or more power supply outputs in series to get more voltage or connect outputs in parallel to get more current. When you connect outputs in series for higher voltage, observe the following precautions:
• Never exceed the floating voltage rating (output terminal isolation) of any of the outputs.
• Never subject any of the power supply outputs to a reverse voltage.
• Only connect outputs that have identical voltage and current ratings in series.

Set each power supply output independently so that the voltages sum to the total desired value. To do this, first set each output's current limit to the maximum the load can safely handle. Next, set the voltage of each output so the voltages sum to the total desired voltage. For example, if you use two outputs, set each to half the desired voltage; if you use three outputs, set each to one-third the desired voltage.

When you connect outputs in parallel for higher current, you should observe certain precautions:
• One output must operate in constant voltage (CV) mode and the other(s) in constant current (CC) mode.
• The output load must draw enough current to keep the CC output(s) in CC mode.
• Connect only outputs with identical voltage and current ratings in parallel.

Set the current limits of all outputs equally so that they sum to the total desired current limit value. Set the voltage of the CV output to a value slightly lower than the voltage of the CC outputs. The CC outputs supply the output current to which they have been set and drop their output voltage until it matches the voltage of the CV unit, which supplies only enough current to fulfill the total load demand.

Using remote sense with series connections
When using remote sense in a series configuration, wire the remote sense terminals on each output in series and connect them to the load, as shown in Figure 4.

[Figure 4. Series connection with remote sense.]
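Here is a short sketch of the per-output setup rules just described, under the stated assumption of identical outputs; the helper names and the 50 mV CV margin are illustrative choices of mine, not a Keysight recommendation:

def series_settings(v_total, i_limit, n_outputs):
    """Per-output settings for n identical outputs wired in series:
    each output contributes an equal share of the total voltage."""
    return {"voltage": v_total / n_outputs, "current_limit": i_limit}

def parallel_settings(v_load, i_limit_total, n_outputs, cv_margin=0.05):
    """Per-output settings for n identical outputs wired in parallel:
    equal current limits that sum to the total, with the CV output set
    slightly below the CC outputs' voltage so they stay in CC mode."""
    i_each = i_limit_total / n_outputs
    return {
        "cv_output":  {"voltage": v_load - cv_margin, "current_limit": i_each},
        "cc_outputs": {"voltage": v_load, "current_limit": i_each},
    }

print(series_settings(48.0, 2.0, n_outputs=3))   # three outputs at 16 V each
print(parallel_settings(5.0, 9.0, n_outputs=3))  # three outputs at 3 A each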
Using remote sense with parallel connections
When you use remote sense in a parallel configuration, wire the remote sense terminals on each output in parallel and connect them to the load, as shown in Figure 5.

[Figure 5. Parallel connection with remote sense.]

To simplify the settings for parallel outputs, some power supplies support an advanced feature called "output grouping." Up to four identical outputs can be "grouped," enabling you to control all grouped outputs as if they were a single, higher-current output.

5. Minimize Noise from Your Power Supply to Your DUT
If your DUT is sensitive to noise on its DC power input, you will want to do everything possible to minimize noise on the input. Here are three simple steps you can take.

Choose a power supply that has low noise
To minimize noise, start at your source. Since filtering noise from your power supply can be difficult, you want to select a power supply that has very low noise to begin with. Choosing a linearly regulated power supply can accomplish this; however, linear power supplies can be large and can generate large amounts of heat. Instead, consider choosing a switching-regulated power supply. Switch-mode power supply technology has improved to the point where the noise on the output is comparable to that of a linear supply. Table 3 compares the noise of a typical linear supply with that of a performance switching supply.

Table 3. Comparison of power supply noise for linearly regulated vs. switching-regulated supplies

                                       RMS noise   Peak-to-peak noise
  Linearly regulated power supply      ~ 500 μV    ~ 4 mV
  Switching-regulated power supply     ~ 750 μV    ~ 5 mV

Selecting a supply with low RMS and peak-to-peak output voltage noise specifications is an excellent start, but you can also minimize noise with proper attention to the lead connections to your DUT.

Shield supply-to-DUT connections
The connections between your supply and DUT can be susceptible to noise pickup. Types of interference include inductive coupling, capacitive coupling, and radio-frequency interference. You can reduce noise in various ways, but the most effective is to ensure that your load and sense connections use shielded two-wire cables. When you use a shielded cable, make sure to connect the shield to earth ground at only one end. For example, connect the shield on the power supply end to earth ground, as shown in Figure 6. Leaving the shield unconnected at both ends can increase capacitive pickup.

[Figure 6. The shield connects to earth ground on only one end of the cable.]

Do not connect the shield to ground at both ends, because ground loop currents may occur. Figure 7 shows a ground loop current that developed because of the difference in potential between the supply ground and the DUT ground. The ground loop current can produce voltage on the cabling that appears as noise to your DUT. In addition to proper shielding, balancing your cable impedance can preserve the power supply's low noise profile.
[Figure 7. A shield connected improperly (at both ends) results in ground loop current: the potential difference ΔVground between earth ground 1 at the supply and earth ground 2 at the DUT drives a ground loop current through the shield.]

Balance output-to-ground impedance
Common-mode noise is generated when common-mode current flows from inside a power supply to earth ground and produces voltage on impedances to ground, including the cable impedance. To minimize the effect of common-mode current, equalize the impedance to ground from the plus and minus output terminals on the power supply. You should also equalize the impedance from the DUT's plus and minus input terminals to ground. Use a common-mode choke in series with the output leads and a shunt capacitor from each lead to ground to accomplish this task.

6. Safeguard Your DUT Using Built-in Power Supply Protection Features
Most DC power supplies have features that protect sensitive DUTs and circuitry from exposure to potentially damaging voltage or current. When the DUT trips a protection circuit in the power supply, the protection circuit turns off the output and displays a notification. Two common protection features are overvoltage protection (OVP) and overcurrent protection (OCP). When designing your test, it is important to understand these protection features to protect your DUT.

Overvoltage protection
OVP is a value set in volts designed to protect your DUT from excessive voltage. When the power supply output voltage exceeds your OVP setting, the protection will trip and turn off the output. OVP is always enabled. When manufacturers ship power supplies from the factory, OVP is typically set well above the maximum rated output of the power supply. Set your OVP trip voltage low enough to protect your DUT from excessive voltage but high enough to prevent nuisance tripping from ordinary fluctuations in the output voltage. Fluctuations can occur during output transient conditions, such as load current changes.

OVP circuits can respond to an overvoltage condition in microseconds, but the output voltage itself will take longer to go down. The amount of time it takes for the output to go down depends on the down-programming capabilities of the power supply and the load connected to the output. Some power supplies have a silicon-controlled rectifier (SCR) across the output that fires when the OVP trips, which brings the voltage down much faster.

CAUTION: On most power supplies, OVP responds to the voltage at the output terminals, not the sense terminals. When using remote sense, program your OVP trip voltage high enough to account for load lead voltage drops.

Overcurrent protection
Most power supplies have an output voltage setting and a current limit setting. The current limit setting determines the value in amps at which the power supply will prevent excessive current from flowing. This constant current mode regulates the output current at the current limit but will not turn off the output. Instead, the voltage decreases below the voltage setting, and the power supply continues to produce current at the current limit setting in CC mode.

OCP, by contrast, shuts off the output to prevent excessive current flow to the DUT. When you enable OCP, if the supply enters CC mode, a protection will trip and turn off the output. In effect, OCP turns the current limit setting into a trip value in amps.
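If your supply speaks SCPI, protection setup is easy to automate. Below is a hedged sketch using the pyvisa library; the VISA address is hypothetical, and although VOLT:PROT, CURR:PROT:STAT, the :TRIP? queries, and OUTP:PROT:CLE are typical SCPI for Keysight supplies, check your model's programming guide before relying on them:

import pyvisa

rm = pyvisa.ResourceManager()
# Hypothetical VISA address -- substitute your instrument's.
psu = rm.open_resource("USB0::0x2A8D::0x1002::MY00000000::INSTR")

psu.write("VOLT 5")              # output voltage setting
psu.write("CURR 1.5")            # current limit setting
psu.write("VOLT:PROT 5.5")       # OVP trip level, above normal transients
psu.write("CURR:PROT:STAT ON")   # enable OCP (it ships turned off)
psu.write("OUTP ON")

# After the test, see whether either protection tripped.
if int(psu.query("VOLT:PROT:TRIP?")) or int(psu.query("CURR:PROT:TRIP?")):
    print("protection tripped; output was shut off")
    psu.write("OUTP:PROT:CLE")   # clear the protection latch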
Set your current limit low enough to protect your DUT from excessive current but high enough to prevent nuisance tripping caused by ordinary fluctuations in the output current, which can occur during output transient conditions such as an output voltage change. When a power supply ships from the factory, OCP is turned off.

[Figure 8. Power supply front panel showing overvoltage protection, overcurrent protection, constant voltage mode, and constant current mode.]

7. Use Output Relays to Physically Disconnect Your DUT
Although you may expect your power supply output to be completely open when you set an "output off" state, that may not be the case. When set to "off," the output impedance will vary from model to model and may depend upon the options installed in the power supply. The output-off state will typically set the output voltage and output current to zero and disable the internal power-generating circuitry. However, these settings do not guarantee that no current will flow into or out of your DUT, as would be the case if you physically disconnected the output terminals from your DUT. A power supply output that is off but not completely open can adversely affect your DUT test for a number of reasons:
• Your DUT contains a DC power source that connects directly across the power supply output.
• Your DUT contains a DC power source that connects across the output in a reverse-polarity configuration.
• Your DUT is sensitive to extra capacitive loading.
• Your DUT produces a changing voltage across the power supply output.

Some power supply models have an internal output relay option that can completely disconnect the power supply output from your DUT. The relay in Figure 9 opens when you use an output-off setting and stops all current flow to the DUT. But even with a relay option installed, certain models may still have output capacitors or capacitively coupled networks connected from the output terminals to chassis ground because of the location of the relays. Therefore, your DUT will still connect to these components (see Figure 10).

[Figure 9. An example of a power supply with internal relays located right at the output terminals. With the relays open, the DUT is completely disconnected from the reverse protection diode, RFI/ESD filters, and output capacitor inside the supply.]

[Figure 10. An example of a power supply with internal relays located inboard of some output components. With the relays open, these components remain connected to your DUT.]

In critical applications that require a complete disconnect between the power supply output and DUT, check with your power supply vendor to see if an output relay option exists that will provide a complete disconnect. If this configuration is not available, you may have to provide external output disconnect relays. The downsides of an external relay configuration are the added cost and complexity of your test setup and the extra space required. You will need to provide the relays, connect wires from the power supply output to the relays, and install a means to control the relays. You may also find it more difficult to synchronize the opening and closing of the external relays with other power-related events.
Built-in output disconnect relays provide advantages over external relays. They take up less space and are not as complex, with less wiring and no external relay control circuitry. They offer better built-in synchronization of relay open/close with other power-related events. And the relays open upon fault conditions such as overvoltage and overcurrent.

8. Capture Dynamic Waveforms Using a Power Supply's Built-in Digitizer
While most power supplies can measure DUT steady-state voltage and current, some can also measure dynamic voltage and current. These supplies feature a built-in digitizer. Traditionally, engineers use digitizers to capture and store analog signals for data acquisition. Like an oscilloscope, which uses a digitizer to display the analog signal present on one of its inputs, a power supply's built-in digitizer captures the dynamic voltage and current waveforms produced on its output.

Basic digitizer operation
Figure 11 shows a digitizer converting an analog waveform into a set of data points. Upon a trigger, the digitizer takes measurement samples and stores them in a buffer.

[Figure 11. A digitizer converts an analog waveform into data points by sampling: after the trigger occurs, measurement samples are taken at a fixed time interval, and the acquisition time equals the time interval times (number of samples − 1).]

When you make a digitizing measurement, you can set two of the following three parameters:
• Time interval: time between samples
• Number of samples: total number of samples you want to take
• Acquisition time: total time during which you want to take samples

When two parameters are set, the following equation determines the remaining parameter:

Acquisition time = time interval × (number of samples − 1)

Similarly, you can configure a power supply's built-in digitizer to trigger and capture power supply output voltage or current waveforms. The supply's digitizer will store a buffer of readings with the waveform data points. You can retrieve the data and use any standard software for analysis. You can also use a customized program or available device characterization software to easily visualize the results in the time domain (an oscilloscope-like view or data-logger view) or perform statistical analysis.

An example digitizer application
If you use your power supply in place of a battery, you can capture dynamic information about the current flowing into your DUT, allowing you to better understand the current drain on your DUT batteries. Consequently, you can make appropriate design adjustments to optimize power management during the DUT's modes of operation. Figure 12 shows a sample waveform of a cell phone's current draw, obtained using a power supply output digitizer and device characterization software (this is not an oscilloscope display).

[Figure 12. Device characterization software uses a power supply's built-in digitizer to capture data showing a cell phone's current draw from the power supply.]

Device characterization software displays the captured data graphically in the time domain, much like an oscilloscope displays a signal. The idle, receive, and transmit current states are discernible from the waveform. Of course, you can analyze digitized data in ways other than using device characterization software. You can use a bus interface such as USB, LAN, or GPIB to capture and retrieve digitized waveform information.
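The acquisition-time equation above is easy to wrap in a helper; a minimal sketch (the function name is mine):

def digitizer_params(time_interval=None, n_samples=None, acq_time=None):
    """Solve the third digitizer parameter from the other two, using
    acquisition time = time interval * (number of samples - 1)."""
    if [time_interval, n_samples, acq_time].count(None) != 1:
        raise ValueError("set exactly two of the three parameters")
    if acq_time is None:
        acq_time = time_interval * (n_samples - 1)
    elif n_samples is None:
        n_samples = round(acq_time / time_interval) + 1
    else:
        time_interval = acq_time / (n_samples - 1)
    return time_interval, n_samples, acq_time

# 20 us between samples, 4096 samples -> 81.9 ms acquisition
print(digitizer_params(time_interval=20e-6, n_samples=4096))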
Retrieved data can be returned either as a scalar value, with the power supply calculating a single number averaged from the data (as it does for the front panel display), or as an array of values. You can even acquire pre- and post-trigger data by changing the trigger offset to capture waveforms such as the peak current draw during a DC inrush current test.

9. Create Time-Varying Voltages Using Power Supply List Mode
Typically, engineers use power supplies to bias circuits requiring constant voltage. However, more advanced applications may require a time-varying voltage (or current). Modern power supplies can easily manage both, using list mode to address the time-varying applications.

What is list mode?
Normally, you can program a PC to change the voltages on a power supply output for discrete periods. In this way, your program controls the transitions between voltages to allow you to test your DUT at different voltages. List mode lets you generate these voltage sequences and synchronize them to internal or external signals without tying up the computer. You set individually programmed steps of voltage (or current) and an associated step duration. After setting the duration for each step, you trigger the list to execute directly on the power supply. You may set the power supply to move on to the next step based on dwell times or triggers. You can program a list to repeat once or multiple times (see Figure 13).

[Figure 13. A list is a sequence of individually programmed voltage (or current) steps initiated with a trigger; here steps 0 through 5, each with its own dwell time, execute twice (repeat count = 2).]

To create a list, set the following:
• One or more voltage or current steps: defined voltage or current values
• Dwell times: the duration associated with each voltage or current step
• Repeat count: the number of times you want the list to repeat

Two uses of list mode for testing
The list mode on a power supply can effectively run two types of tests: a voltage sequence test and a voltage waveform test. In a voltage sequence test, you take measurements while the DUT is exposed to discrete stimulus voltage values. In a voltage waveform test, you take measurements while the DUT is exposed to a stimulus voltage waveform. In both cases, the stimulus involves creating a sequence of voltage steps; the first has multiple levels of steady-state voltage, and the second has a continuously varying voltage profile. Engineers commonly use both tests for DUT design verification. Be aware that DC power supplies have limited bandwidth and typically generate voltage waveforms only at frequencies up to tens of kilohertz. Also, most power supplies are unipolar devices that create only positive voltages.

Using list mode
You can use list mode to perform a voltage waveform test on automotive electronic systems. During the startup of an internal combustion engine, also known as a cold crank, battery voltage levels drop considerably as the electric starter motor draws enormous amounts of current (see Figure 14). Once the engine turns, the battery voltage plateaus, then hits a final level as the electric starter turns off.

[Figure 14. An automotive cold-crank profile represented with list steps: the supply voltage drops to Vlow, recovers to Vplateau, and settles at Vfinal (see the dwell times in Table 4).]

You can enter the simplified sequence in Table 4 into a list to perform electronic control unit design validation testing on an automotive electronic system. (Simulate the transitions between voltage levels with additional steps.) This test ensures that the automotive electronics have adequate power transient immunity. Use list mode in this manner when you need to apply a time-varying voltage to your DUT.

Table 4. A simple list used to simulate the automotive crank voltage profile in Figure 14

  Step   Voltage level   Voltage value   Dwell time
  0      Vlow            8 V             300 ms
  1      Vplateau        12 V            500 ms
  2      Vfinal          14 V            400 ms
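As a sketch of how the Table 4 list might be programmed over SCPI with pyvisa: the VISA address is hypothetical, and the LIST and trigger command spellings vary by model (some supplies use TRIG:TRAN:SOUR and INIT:TRAN, for example), so treat this as illustrative rather than a verified sequence for any particular instrument:

import pyvisa

rm = pyvisa.ResourceManager()
psu = rm.open_resource("TCPIP0::192.168.1.50::inst0::INSTR")  # hypothetical

psu.write("LIST:VOLT 8,12,14")      # Table 4 voltage steps
psu.write("LIST:DWEL 0.3,0.5,0.4")  # dwell times, in seconds
psu.write("LIST:COUN 1")            # repeat count: run the list once
psu.write("VOLT:MODE LIST")         # arm list mode for the output voltage
psu.write("TRIG:SOUR BUS")          # wait for a software trigger
psu.write("OUTP ON")
psu.write("INIT")                   # initiate the transient system
psu.write("*TRG")                   # fire the trigger; the cold-crank list runs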
10. Control Instruments, Automate Tests, and Perform Analysis with Software
Getting the most from test equipment requires that you retain valuable data and quickly analyze it to gain actionable insight into power consumption and power-related events. Software solves this challenge by enabling you to gain insight with minimum effort and maximum results. You will no longer have to deal with manual data capture or with modifying your methodology from instrument to instrument.

Software allows you to control and set up your instruments centrally. It can enable you to streamline analysis and documentation of results by capturing data, screenshots, and system states with a few clicks. Also, software platforms can work with multiple instrument types to deliver the data logging capabilities and measurement visualization your testing demands. For example, software can visualize measurement data from instruments such as function generators, power supplies, DMMs, oscilloscopes, data acquisition systems, source/measure units, eLoads, and counters on a single screen to help you understand tests and derive answers.

Software can also immediately analyze data captured from multiple instruments and find correlations. One example is identifying and alerting you to connections between specific device events and current consumption. Software also enables you to quickly replicate results by recalling a past state of your bench with a single mouse click. You can easily export what you have captured to tools such as Excel and MATLAB to conduct offline analysis. Furthermore, software can enable you to create and automate tests rapidly with no programming needed. Combine multiple instruments to create millions of unique test sequences and quickly make sense of your test results.

An added benefit of software is that it enables your company to have a unified test platform. This unified test platform comes from using a standardized software platform across product development lifecycles. Teams can share test methodology and have a common development and debug environment for all test requirements. In addition, team members need to learn only one test software platform, enhancing their efficiency. The benefits of standardization result in on-time product releases, reduced overall test costs, and increased return on investment.

[Figure 15. Easily connect, record results, and visualize measurements across multiple instruments simultaneously, without the need for programming.]

Bonus Tip: Rack-Mounting Your Power Supply
When you are planning a test rack, selecting an instrument layout can be a challenging task. Safety, reliability, and performance are among the many requirements that affect your choices. Specifically, pay attention to these considerations when you put your DC power supply in a rack:
• Distribute weight properly to avoid rack instability.
• Provide adequate AC input power to avoid excessive current draw.
• Provide proper heat management to avoid excessive temperatures.
• Place instruments properly to minimize magnetic interference.
• Route wires to minimize conducted and radiated noise.

Weight distribution
Typically, a power supply is one of the heaviest instruments in your test rack. Mount the power supply near the bottom of the rack to lower the rack's center of gravity and reduce its tipping risk (Figure 16).

[Figure 16. To properly balance your test system, place larger, heavier instruments near the bottom: a side view compares a top-heavy, poorly balanced test system with a well-balanced test system with a low center of gravity.]

AC input power
When planning the size of your AC input line, use the maximum current rating of each instrument in your rack to ensure the AC line providing power to your rack is adequate. Most instruments draw a relatively constant amount of current. However, a power supply's AC input current varies with its output loading. If you do not know the maximum load to expect on the output of the power supply, plan for the worst-case scenario by using the maximum rated input current of the supply.

Heat management
Power supplies typically have internal cooling fans. When you mount your power supply in a rack, be sure to provide adequate spacing for air intake and exhaust. Keep thermally sensitive instruments such as DMMs away from power supplies because high temperatures can have an adverse effect on their readings.

Magnetic interference
Liquid crystal displays have replaced most CRT displays; however, if you are using older computers or oscilloscopes with CRT displays, be aware that they are susceptible to magnetic fields. Magnetic fields can also affect the performance and accuracy of some instruments. For example, a voltmeter's circuitry could be susceptible to a large magnetic field produced by a transformer, such as the one inside a power supply. Be sure to install your DC supplies away from your magnetically sensitive instruments, especially your DMM.

Routing wires
Since power wires can radiate electrical noise, and both stimulus and measurement signal-carrying wires are susceptible to this noise, separate power wires from signal-carrying cables.

Resources
Learn more about DC power supplies at: www.keysight.com
For more information on Keysight Technologies' products, applications, or services, please contact your local Keysight office. The complete list is available at: www.keysight.com/find/contactus
This information is subject to change without notice. © Keysight Technologies, 2012 - 2023, Published in USA, May 26, 2023, 5990-8888EN
189353
http://www.hanlonmath.com/pdfFiles/4177.WordProblems-UniformMot.pdf
Hanlonmath.com Tape 7 © 1997 Mathematical Systems, Inc. 800.218.5482
Word Problems – Uniform Motion
Bill Hanlon

Solving word problems is what kids in algebra live for. Just as there are different formats for solving different types of equations, there are different formats for solving different types of word problems. You should keep in mind there are other methods for solving word problems than the ones I present. To solve word problems involving uniform motion, we need to know that

DISTANCE = RATE × TIME

I will use a distance-rate-time chart and solve the problems in terms of distance whenever possible. In that way I can avoid fractional equations. From this perspective there are two types of uniform motion problems. Either
A. the distances are equal, or
B. the sum of the distances equals a number.

TYPE A. If the distances are equal, one of two things must occur:
1. You go somewhere and return, or
2. You leave to go somewhere and someone else leaves later and catches up to you.
In either case, the distances are equal. Mathematically we write D1 = D2.

TYPE B. If someone did not catch up to you, and you did not go somewhere and come back, the distances are not equal. That means the sum of the distances must equal a number. Mathematically, we write D1 + D2 = #.

Let's see how all this works.

EXAMPLE: Two trains start from the same station at the same time and travel in opposite directions. One train travels at an average rate of 40 mph, the other at 65 mph. In how many hours will they be 315 miles apart?

First we'll make the d = rt chart, but we won't fill in the d.

           d  =  r  ×  t
  Train 1        40     x
  Train 2        65     x

The reason we have an x in both rows of the time column is that the trains left at the same time and will be 315 miles apart at the same time. In other words, their times are equal. Now, the big question: are their distances equal? Since they do not meet the criteria of a TYPE A problem, the answer is no. That means the sum of the distances must equal a number:

D1 + D2 = #
40x + 65x = 315
105x = 315
x = 3

It will take three hours.

EXAMPLE: Bob starts out in his car traveling 30 mph. Four hours later, Mr. Speedster starts out from the same point at 60 mph to overtake Bob. In how many hours will he catch him?

Making the d = rt chart:

                  d  =  r  ×  t
  Bob                   30     x + 4
  Mr. Speedster         60     x

Since Mr. Speedster traveled the least amount of time, we called his time x. This is a TYPE A problem; the distances are equal:

DBob = DSpeedster
30(x + 4) = 60x
30x + 120 = 60x
120 = 30x
4 = x

It will take 4 hours to catch Bob.
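Both problem types reduce to one-line formulas once the chart is set up; here is a small Python check of the two examples (the function names are mine):

def opposite_directions(r1, r2, distance_apart):
    """Type B: D1 + D2 = #.  r1*t + r2*t = distance  ->  t."""
    return distance_apart / (r1 + r2)

def catch_up(r_slow, r_fast, head_start_hours):
    """Type A: D1 = D2.  r_slow*(t + h) = r_fast*t  ->  t."""
    return r_slow * head_start_hours / (r_fast - r_slow)

print(opposite_directions(40, 65, 315))  # 3.0 hours (the two trains)
print(catch_up(30, 60, 4))               # 4.0 hours (Mr. Speedster catches Bob)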
189354
https://vocus.cc/article/6803a00efd897800014d0dd7
Permutations, Combinations, and Probability (3) (9,473 characters) (I am not yet comfortable with MathJax or tables here, so everything is described in plain text.)
By 銘記星辰之下, published in the 數理邏輯 (math and logic) section, 2025/04/19 (updated 2025/07/12). About 24 minutes of reading.

A toolbox of permutation-and-combination techniques, presented as game levels.

Level: Pizza Crisis at the Ghost Diner
Story: A mischievous ghost has thrown all 6 magic toppings (mushroom, bat wing, pumpkin, ...) into the pizza oven, but only 3 may be chosen!
Task: How many "spooky flavors" can the ghost put together?
✨ Hint: use the combination formula C(6,3) — and watch out, the pumpkin escapes!

First, decide whether this is a permutation or a combination problem. Picking mushroom, bat wing, and pumpkin counts as the same flavor no matter the order in which the three toppings are chosen, so it is a combination problem. The combination formula is C(n, k) = n! / (k!(n − k)!), where n is the number of available toppings and k is how many are chosen; here n = 6 and k = 3. Substituting, C(6, 3) = 6! / (3!·3!) = 720 / (6 × 6) = 720 / 36 = 20, so there are 20 possible combinations.

Level: Broomstick Flight Permit
Story: The Ministry of Magical Transport requires every flying broom to carry two warning lights of different colors (red, yellow, green).
Task: From 5 bulbs (2 red, 2 yellow, 1 green), choose 2. What is the probability the pair "meets the regulation"?
✨ Hint: count the total number of possibilities first, then subtract the evil same-color pairs!

With combinations: choosing 2 of the 5 bulbs without regard to order gives C(5, 2) = 10 possible pairs. The illegal pairs are the same-color ones, and only two reds or two yellows are possible, since there is just one green bulb: C(2, 2) = 1 way for the reds plus C(2, 2) = 1 way for the yellows, so 2 same-color pairs. The legal pairs number 10 − 2 = 8, and the probability is 8/10 = 4/5 = 80%.

The same problem can be solved with permutations, where order is taken into account. The total number of ordered picks is P(5, 2) = 5 × 4 = 20 (5 choices for the first bulb, 4 for the second). Counting the different-color ordered picks case by case: red then yellow, 2 × 2 = 4; red then green, 2 × 1 = 2; yellow then red, 2 × 2 = 4; yellow then green, 2 × 1 = 2; green then red, 1 × 2 = 2; green then yellow, 1 × 2 = 2. That gives 4 + 2 + 4 + 2 + 2 + 2 = 16 ordered picks with different colors, so the probability is 16/20 = 4/5 = 80%, the same as before.

Alternatively, count the bad ordered picks and subtract. A same-color pick must be two reds or two yellows. For the reds the count is P(2, 2) = 2 × 1 = 2 — note the difference from combinations: picking red A then red B and picking red B then red A are two different permutations but one combination. Likewise 2 for the yellows, so 4 bad picks, leaving 20 − 4 = 16 good ones and again a probability of 16/20 = 4/5.

Either way the answer agrees. The key point is consistency: if the denominator counts permutations, the numerator must count permutations too; if both count combinations, the ratio is equally correct.
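Both answers are small enough to verify by brute force; here is a Python check using the standard library (the variable names are mine):

from itertools import combinations, permutations
from math import comb

# Ghost Diner: 6 toppings choose 3
print(comb(6, 3))  # 20

# Broom permit: 2 red, 2 yellow, 1 green bulbs; pick 2 of different colors
bulbs = ["R1", "R2", "Y1", "Y2", "G1"]

def color(bulb):
    return bulb[0]

pairs = list(combinations(bulbs, 2))
legal = [p for p in pairs if color(p[0]) != color(p[1])]
print(len(legal), "/", len(pairs))       # 8 / 10  -> 4/5

ordered = list(permutations(bulbs, 2))
legal_o = [p for p in ordered if color(p[0]) != color(p[1])]
print(len(legal_o), "/", len(ordered))   # 16 / 20 -> the same 4/5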
Level: The Dragon's Treasure Code
Story: The fire dragon guarding the chest knows the code is 4 digits, but it only remembers that there are "exactly two 7s" and that they do not appear next to each other!
Task: How many possible codes are there?
✨ Hint: place the two 7s first, then count the ways to fill in the other digits.

Since the code contains exactly two 7s, the other two positions must be non-7 digits, otherwise there would be more than two 7s; each of those positions therefore has 9 possible digits (0, 1, 2, 3, 4, 5, 6, 8, 9). Note that a code may begin with 0: as a pure number 0100 is the same as 100, but as a password 0100 is a distinct entry. The problem also says nothing about forbidding repeats among the non-7 digits, so repetition is allowed.

Step 1: choose two non-adjacent positions out of the four for the 7s. Of the C(4, 2) = 6 position pairs — (1,2), (1,3), (1,4), (2,3), (2,4), (3,4) — the adjacent ones are (1,2), (2,3), and (3,4), leaving 3 non-adjacent pairs: (1,3), (1,4), and (2,4).
Step 2: fill the two remaining positions. Each independently takes any of 9 digits, so by the multiplication principle there are 9 × 9 = 81 ways.
Total: 3 × 81 = 243 possible codes.

Why is it 9 × 9 = 81 rather than C(9, 2) or P(9, 2)? Because the two positions are filled independently and may repeat — the first leftover position can be 0 and the second can also be 0. The combination count C(n, k) assumes distinct, unordered choices, and the permutation count P(n, k) assumes distinct, ordered choices; neither allows repeats.

Key distinction: is repetition allowed?
- Repetition allowed → multiplication principle, 9 × 9 = 81. Each position is chosen independently, so the code 7007 is legal.
- Repetition forbidden → permutations, P(9, 2) = 9 × 8 = 72. The two remaining digits must differ, so 7017 is legal but 7007 is not.
- C(9, 2) = 36 ignores order entirely and fits only situations where an unordered pair of distinct digits is wanted (such as counting "digit pairs" statistically), where 7017 and 7107 would count as one and the same. That is not our situation.

As a cross-check, count all codes with exactly two 7s and subtract the adjacent ones. Choosing any 2 of the 4 positions for the 7s gives C(4, 2) = 6, and the remaining positions give 9 × 9 = 81, for 6 × 81 = 486 codes in total, adjacent or not. The adjacent cases use one of the 3 adjacent position pairs, giving 3 × 81 = 243. Then 486 − 243 = 243, matching the direct count.

Level 7: The Goblin's Wager
Story: A greedy goblin says: "Roll 3 dice — if at least one shows a 6, I'll give you gold coins!"
Task: Compute 小巫喵's (the little witch-cat's) probability of winning.
✨ Hint: use the complement, 1 − (5/6)^3 — and watch for the goblin cheating!

First identify the complementary event: "none of the 3 dice shows a 6." Each die avoids a 6 with probability 5/6, and the three dice are independent, so the probability of no 6 at all is (5/6) × (5/6) × (5/6) = (5/6)^3 = 125/216. The probability of at least one 6 is therefore 1 − 125/216 = (216 − 125)/216 = 91/216 ≈ 0.421. So 小巫喵 wins with probability 91/216, roughly 42.1%.
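Again, both counts can be verified by exhaustive enumeration; a short Python check:

from itertools import product
from fractions import Fraction

# Dragon code: 4-digit strings with exactly two non-adjacent 7s
def legal(code):
    pos = [i for i, d in enumerate(code) if d == "7"]
    return len(pos) == 2 and pos[1] - pos[0] > 1

print(sum(legal(f"{n:04d}") for n in range(10000)))  # 243

# Goblin's wager: at least one 6 among three dice
rolls = list(product(range(1, 7), repeat=3))
wins = sum(1 for r in rolls if 6 in r)
print(Fraction(wins, len(rolls)))  # 91/216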
Level: The Time Portal's Choice
Story: The portal offers 3 roads, and each road comes with 2 kinds of weather (sunny/rainy) — but a rainy road has a 50% chance of jamming!
Task: Find the probability that 小巫喵 "teleports successfully."

The weather on a road is decided randomly at teleport time — it is not a fixed property of the road — and each road is sunny or rainy with probability 1/2. Rain jams the portal half the time, so on a rainy road the success probability is 0.5, while on a sunny road it is 1. The problem states no preference among the roads, so assume each is chosen with probability 1/3.

For any single road, P(success) = P(sunny) × 1 + P(rainy) × 0.5 = 0.5 × 1 + 0.5 × 0.5 = 0.75. Since every road has the same 75% success rate, averaging over the choice of road changes nothing: 3 × (1/3 × 0.75) = 0.75, i.e. 75%.

The same answer comes from enumerating all (road, weather) outcomes: 3 roads × 2 weathers = 6 outcomes, each with probability 1/3 × 1/2 = 1/6. A sunny outcome contributes 1/6 × 1 = 1/6 to the success probability and a rainy one contributes 1/6 × 0.5 = 1/12, so the total is 3 × (1/6 + 1/12) = 3 × 1/4 = 3/4 = 75%.

A tempting different reading: if 小巫喵 could keep trying roads until one works, each road fails with probability 0.5 (rain) × 0.5 (jam) + 0.5 (sun) × 0 = 0.25, all three fail with probability 0.25³ = 1/64, and success has probability 1 − 1/64 = 63/64 ≈ 98.4%. But the formula 1 − (0.25)³ applies only if the problem allows retrying after a failure, which this one never says. The lesson: read carefully how many attempts the rules of the magical world allow!

Level: The Unicorn's Rainbow Bridge
Story: The rainbow bridge is built from 7 colors, but the unicorn insists that "red and purple must not be adjacent"!
Task: Count the legal arrangements of the bridge.

The total number of arrangements of 7 distinct colors is 7! = 5040. To count the bad arrangements, treat red and purple as a single glued block, which leaves 6 units to arrange in 6! = 720 ways; the block itself has 2 internal orders (red-purple or purple-red), so there are 2 × 720 = 1440 arrangements with red and purple adjacent. The legal count is 7! − 2 × 6! = 5040 − 1440 = 3600.

Level: Final Trial! The Demon King's Probability Riddle
Story: The demon king sneers: "My weak point is 1 of these 3 magic stones — but you only get 2 attacks!"
Task: Find the probability that 小巫喵 "hits the weak point."

First decide whether this is sampling with or without replacement, that is, whether the second attack can strike the same stone as the first. If attacks were independent and could repeat (with replacement), each attack would hit with probability 1/3, both would miss with probability (2/3)² = 4/9, and at least one hit would have probability 1 − 4/9 = 5/9. But normally you would not waste an attack on a stone you have already struck, so the natural model is without replacement:

- The first attack hits: probability 1/3.
- The first attack misses (probability 2/3), leaving 2 stones of which 1 is the weak point, so the second attack hits with probability 1/2. This path has probability (2/3) × (1/2) = 1/3.

The two paths are mutually exclusive, so the total probability is 1/3 + 1/3 = 2/3. The key move is to add the probabilities of the distinct success paths, not to add raw fractions blindly.

An intuitive picture: imagine 3 treasure chests, only 1 holding treasure. The first opening wins with probability 1/3; if it fails (probability 2/3), 2 chests remain and the next opening wins with probability 1/2. Total winning probability: 1/3 + (2/3)(1/2) = 2/3. Equivalently, attacking two distinct stones amounts to choosing one of the C(3, 2) = 3 possible pairs of stones, and 2 of those 3 pairs contain the weak point — again 2/3.

✨ Solution steps:
1. Identify the problem type: sampling without replacement — a magic stone already attacked is not attacked again.
2. Compute the paths: the first attack hits with probability 1/3; the first misses and the second hits with probability (2/3) × (1/2) = 1/3.
3. Total probability: 1/3 + 1/3 = 2/3.

Contains AI-assisted content. #大腦 #成長 #學習 #數學 #排列 #組合 #機率
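A quick check of the last two levels — exhaustive enumeration for the rainbow bridge and a Monte Carlo simulation for the demon king (the 100,000-trial count is an arbitrary choice of mine):

import random
from itertools import permutations

# Rainbow bridge: arrangements of 7 colors with red and purple non-adjacent
colors = ["red", "purple", "C3", "C4", "C5", "C6", "C7"]
ok = sum(
    1
    for p in permutations(colors)
    if abs(p.index("red") - p.index("purple")) > 1
)
print(ok)  # 3600 = 7! - 2*6!

# Demon king: two attacks without replacement on 3 stones
random.seed(0)
trials = 100_000
hits = 0
for _ in range(trials):
    weak = random.randrange(3)
    first, second = random.sample(range(3), 2)  # two distinct stones
    hits += weak in (first, second)
print(hits / trials)  # ~0.667, i.e. 2/3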
189355
https://www.youtube.com/watch?v=0sWzbf8pQmA
Arithmetic Sequence | Negative Common Difference | Explain in Detailed | TEACHER MJ - MATH TUTORIAL
Posted: 20 Sep 2023
Description: Arithmetic Sequence | Explain in Detailed | Finding the nth term | In this video you will be able to solve for the nth term of an arithmetic sequence. This video also helps you understand how to use the formula, and it contains plenty of examples. Hope this video helps! Like and subscribe! I LOVE MATH! KEEP FIGHTING!

Transcript: hi guys, good day, it's me Teacher MJ. Our topic for today, class, is all about the arithmetic sequence, and we are told to find the nth term, so without further ado let's do this topic. This is actually a request from our subscribers and followers on YouTube and Facebook — I would like to give thanks and a shout-out to them; this answer is for you. The first thing you need to do, class, is write the arithmetic sequence formula, and the formula is a sub n equals the first term plus the quantity n minus 1 times the common difference. Our a sub n, class, is the term you are looking to find, a sub 1 is the first term, n is the number of terms — so if you're looking for the 8th term, your n is eight — and d is the common difference. Now the first step, class, is to find the common difference so you can use this formula. Let's start with number one: we have negative 1, negative 3, negative 5, and negative 7. To find the common difference, class, simply subtract the first term from the second term: negative 3 minus negative 1. If you have negative next to negative, you need to multiply the negative signs, so negative times negative is positive and this becomes negative 3 plus 1. If the signs are not the same, subtract and copy the sign of the larger absolute value: 3 minus 1 is 2, and since 3 is greater than 1 we copy the negative sign, so the common difference is negative 2. You should also check, class, by subtracting the second term from the third term to see that they really have the same common difference: negative 5 minus negative 3 is negative 5 plus 3, and again, signs not the same, 5 minus 3 is 2, copy the sign of the larger number — negative 2. Since they have the same common difference, this sequence is an arithmetic sequence, and now you can find the eighth term using the formula.
Once again, n is the number of terms, and since we're looking to find the eighth term you can write this, class, as a sub 8, the eighth term. So a sub 8 equals the first term, negative 1, plus — n is 8 because you're looking for the eighth term — the quantity 8 minus 1, times the common difference, which is negative 2. Simplify inside the parentheses first: 8 minus 1 is 7. So a sub 8 equals negative 1 plus 7 times negative 2. Do not add the negative 1 and the 7 first, class — follow the order of operations: exponents, multiplication, division, addition, subtraction — multiplication comes before adding the negative 1. Seven times negative 2 is negative 14, and we put parentheses around it, class, because two signs are not allowed to sit right next to each other. So we have negative 1 plus negative 14. Rules for adding integers: if they have the same sign, simply copy the sign and add the numbers, so 1 plus 14 is 15, and the answer, class, is negative 15. So a sub 8, the eighth term, is negative 15. All right, easy, right? That's the answer for number one, class, so let's try number two. Pause the video, class — I will be erasing number one. Once again, this is the formula, and number two is the sequence 12, 7, 2, and negative 3. Once again you need to get the common difference before you can use the formula, so subtract the first term from the second term: 7 minus 12, that's negative 5. Or you can think of it this way, class, because some students will be confused with this one: 7 is positive and 12 is negative; if the signs are not the same, subtract — 12 minus 7 is 5 — then copy the sign of the larger number, which is negative, and that's why we have negative 5.
All right, you think of it that way, guys. So 7 minus 12, second term minus first term, gives negative 5. You also check, class, by subtracting the second term from the third term: 2 minus 7. Once again, think of this as positive 2 and negative 7; the signs are not the same, so subtract — 7 minus 2 is 5 — and copy the sign of the larger number, which is negative: negative 5. So they have the same common difference, and this is an arithmetic sequence. Now substitute the values. Since we're looking to find the 35th term, this will be a sub 35 equals the first term, 12, plus — n is 35 because you are looking for the 35th term — the quantity 35 minus 1, times the common difference, negative 5. So a sub 35 equals 12 plus 34 times negative 5. Once again, class, multiplication comes first before addition — when a parenthesis sits next to a number it means multiplication. So let me just multiply 34 times negative 5 over here: positive times negative, so the answer should be negative — let me set the negative sign aside first. Five times 4 is 20, write 0, carry 2; 5 times 3 is 15, plus 2 is 17 — that's 170, and with the sign, negative 170, because positive 34 multiplied by negative 5 is negative 170. So a sub 35 equals 12 plus negative 170. If the signs are not the same, subtract and copy the sign of the larger number. So 170 minus 12: borrow one, 10 minus 2 is 8, 6 minus 1 is 5, bring down the 1 — that's 158 — and since 170 is greater than 12 we copy the negative, so we have negative 158, and that will be our a sub 35.
Put the answer here: a sub 35 equals negative 158. And you can check this one, class — you can choose either of these two solutions, because there are some students who will be confused with this one. You can write 12 plus the quantity positive 34 times negative 5, or you can multiply the sign through first — positive times negative is negative — so 12 plus negative 170 becomes 12 minus 170. Either way, if the signs are not the same you subtract and copy the sign of the larger number, and you will get the same answer, class: negative 158. It depends on you which one you prefer. So our a sub 35 is negative 158. All right, last one, class: what if we have a decimal? Let's try number three: 1.2, 3.2, 5.2, and we want the 15th term. Once again, you just need to find the common difference and use the formula: a sub n equals a sub 1 plus the quantity n minus 1 times d. Subtract the first term from the second term: 3.2 minus 1.2 — 2 minus 2 is 0, 3 minus 1 is 2 — so the common difference is 2. You can also subtract the third term minus the second term, 5.2 minus 3.2, and you get 2 again. And 2.0, class, is just the same as the whole number 2. So a sub 15 equals the first term, 1.2, plus — since you're looking for the 15th term — the quantity 15 minus 1, which is 14, times 2. Fourteen times 2 is 28, so a sub 15 equals 1.2 plus 28. Line up the decimals — understand that there's a .0 on the 28 — and 28 plus 1.2 is 29.2. So the answer is 29.2; that's your a sub 15. That's it, class — that's how you solve for or find the nth term given an arithmetic sequence. Once again, if you like this video, do not forget to like, share, and subscribe, and share it with your classmates, class, so that we can help more students. Once again, this is Teacher MJ, and you have a great day. Bye bye!
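The whole lesson compresses to one formula; a minimal Python version reproducing the video's three examples:

def nth_term(a1, d, n):
    """nth term of an arithmetic sequence: a_n = a1 + (n - 1) * d."""
    return a1 + (n - 1) * d

# The three worked examples from the video
print(nth_term(-1, -2, 8))    # -15
print(nth_term(12, -5, 35))   # -158
print(nth_term(1.2, 2, 15))   # 29.2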
189356
http://www.cs.uni.edu/~campbell/stat/prob4.html
Conditional probability and the product rule

In California, it "never" rains during the summer (one summer when I was there it rained one day every month, and not very hard). If I am planning a picnic, I do not care that it rains one eighth of the days in California; rather that it rains one quarter of the days in September, or one thirtieth of the days in June, depending on when I want my picnic. This is the essence of conditional probability.

- Conditional probability
- Product rule
- Independence
- Product rule for independent events

Conditional probability

The probability of A conditioned on B, denoted P(A|B), is equal to P(AB)/P(B). The division provides that the probabilities of all outcomes within B will sum to 1. Conditioning restricts the sample space to those outcomes which are in the set being conditioned on (in this case B). Note that P(A|B) is not equal to P(B|A); the set after the vertical bar is the set one is conditioning on.

Example: If P(A)=.5, P(B)=.4, and P(AB)=.2 (hence P(A∪B)=.7 and P(A'B')=.3), then P(A|B)=.2/.4=.5 and P(B|A)=.2/.5=.4.

Product rule

The definition of conditional probability, P(A|B)=P(AB)/P(B), can be rewritten as P(AB)=P(A|B)P(B). This is the product rule.

Example: If P(A)=.5 and P(B|A)=.4, then P(BA)=.4 × .5 = .2. (Of course AB=BA.)

Independence

Two events A and B are called independent if P(A|B)=P(A), i.e., if conditioning on one does not affect the probability of the other. Since P(A|B)=P(AB)/P(B) by definition, P(A)=P(AB)/P(B) if A and B are independent, hence P(A)P(B)=P(AB); this is sometimes given as the definition of independence. Rearranging this last equation as P(AB)/P(A)=P(B), we see that if P(A|B)=P(A), then also P(B|A)=P(B).

Examples: If P(A)=.5, P(B)=.4, and P(AB)=.2, then P(A|B)=.2/.4=.5 = P(A), and A and B are independent. If P(A)=.6, P(B)=.4, and P(AB)=.2, then P(A|B)=.2/.4=.5, which is not equal to .6=P(A), and A and B are not independent.

Product rule for independent events

If A and B are independent, P(AB)=P(A)P(B) (because P(A|B)=P(A) for independent events). Example: If A and B are independent and P(A)=.3 and P(B)=.6, then P(AB)=.3 × .6 = .18.

N.B.: If A and B are disjoint (which includes the case where A and B are complementary), then P(AB)=0 and P(A|B)=0=P(B|A).

Competencies:
1. If P(A)=.5, P(B)=.4, and P(AB)=.3, what is P(A|B)? Are A and B independent?
2. If P(A)=.6, P(B)=.4, and P(A|B)=.5, what is P(AB)?
3. If A and B are independent and P(A)=.3, P(B)=.6; P(AB)=?

Reflection: What are the relationships among independence, complementary, and mutually exclusive (disjoint)?

Challenge: If P(A)=.4, P(B)=.7, and P(A∪B)=.9, what is P(A|B)?
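These definitions translate directly into a few lines of code. A minimal Python sketch reproducing the worked numbers above (the function names are mine, not from the page):

    def cond_prob(p_ab, p_b):
        """P(A|B) = P(AB) / P(B)."""
        return p_ab / p_b

    def independent(p_a, p_b, p_ab, tol=1e-12):
        """A and B are independent iff P(AB) = P(A) * P(B)."""
        return abs(p_ab - p_a * p_b) < tol

    print(cond_prob(0.2, 0.4))         # 0.5 = P(A), so A and B are independent
    print(independent(0.5, 0.4, 0.2))  # True
    print(independent(0.6, 0.4, 0.2))  # False: P(A|B) = 0.5 != 0.6 = P(A)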
189357
https://askfilo.com/user-question-answers-smart-solutions/10-a-man-has-a-job-which-requires-him-to-work-for-8-days-3135313931323037
CBSE Smart Solutions

Question asked by Filo student:

10. A man has a job which requires him to work for 8 days straight and rest for the next three days. If he started working from the 1st Monday of April 20XX, then for how many days did he work in that month?

APRIL 20XX
Sun Mon Tue Wed Thu Fri Sat
  1   2   3   4   5   6   7
  8   9  10  11  12  13  14
 15  16  17  18  19  20  21
 22  23  24  25  26  27  28
 29  30

A. 25   B. 22   C. 23   D. 24

Text solution (Verified)

Concepts: work schedule, days calculation.

Explanation: The man works for 8 days and then rests for 3 days, making a complete cycle of 11 days. We lay this cycle over the month of April, starting from the first Monday, and count the working days.

Step by Step Solution:
Step 1. Identify the start date: the calendar shows April 1 on a Sunday, so the 1st Monday of April 20XX is April 2.
Step 2. Note the total number of days in April: 30 days.
Step 3. Determine the length of one complete work-rest cycle: 8 days working + 3 days resting = 11 days.
Step 4. First cycle: he works April 2-9 (8 days) and rests April 10-12. Second cycle: he works April 13-20 (8 days) and rests April 21-23.
Step 5. The third cycle starts on April 24; he works April 24-30, which is 7 days, and then the month ends. So he works 8 + 8 + 7 = 23 days in April.

Final Answer: 23 (option C).
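A quick way to verify the count is to simulate the 11-day cycle across the month. A small Python sketch, assuming the calendar above (April 1 a Sunday, work starting Monday, April 2):

    # Work/rest schedule check: cycles of 8 working days then 3 rest days.
    worked = 0
    day_in_cycle = 0          # 0..10; days 0..7 are work, 8..10 are rest
    for day in range(2, 31):  # April 2 through April 30
        if day_in_cycle < 8:
            worked += 1
        day_in_cycle = (day_in_cycle + 1) % 11
    print(worked)             # 23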
189358
https://www.testing.com/tests/anti-dnase-b/
Anti-DNase B

Also Known As: ADN-B, ADB, ADNase-B

Last modified on Jun 28, 2021

At a Glance

Why Get Tested?
To help determine whether you have had a recent strep infection with the bacteria group A Streptococcus; to help diagnose complications resulting from a recent strep infection, such as rheumatic fever or glomerulonephritis, a form of kidney disease.

When To Get Tested?
When you have symptoms such as fever, chest pain, fatigue and shortness of breath that suggest rheumatic fever, or symptoms such as accumulation of fluid (edema) and dark urine that are associated with glomerulonephritis, especially when you recently may have had a group A streptococcal infection that was not diagnosed and treated appropriately; may be done along with or following an antistreptolysin O (ASO) test.

Sample Required?
A blood sample drawn from a vein.

Test Preparation Needed?
None.

What is being tested?

Anti-DNase B (antideoxyribonuclease-B antibody) is one of the most common of several antibodies that are produced by the body's immune system in response to an infection with group A Streptococcus (strep infection). The anti-DNase B test measures the amount of this antibody in the blood. It is typically done with or following an antistreptolysin O (ASO) test, another test to detect antibody to a streptococcal antigen.

Group A Streptococcus (Streptococcus pyogenes) is the bacterium that causes strep throat and a variety of other infections, including skin infections (pyoderma, impetigo, cellulitis). In most cases, strep infections are diagnosed and successfully treated with antibiotics. Sometimes a strep infection does not cause identifiable symptoms, goes untreated, or is treated ineffectively, and complications (sequelae), namely rheumatic fever and glomerulonephritis, can develop, especially in young children. These secondary conditions are uncommon in the U.S. because of routine strep testing and antibiotic treatment, but they do occur.

Rheumatic fever is a delayed immune response in which the body produces antibodies directed against itself (autoimmune). This can cause serious damage to heart valves and lead to symptoms such as swelling and pain in several joints, heart inflammation (carditis), skin nodules, rapid, jerky movements (Sydenham chorea), and skin rash. Post-streptococcal glomerulonephritis, a condition affecting the kidneys, can develop a week or two after a strep infection.
Your body produces antibodies to fight the strep infection, but these antibodies can eventually be deposited in the glomeruli, which are small, looping blood vessels within the kidneys that continually filter the blood. This can cause inflammation and kidney damage, affecting kidney function.

Common Questions

How is the test used?

The anti-DNase B test may be used to confirm that a recent strep infection with group A Streptococcus is the cause of glomerulonephritis (a form of kidney disease) or rheumatic fever. The test may be used in conjunction with an ASO test, another test used to detect prior strep infections. Since complications resulting from strep infections have dropped in the U.S., so has use of the ASO test and anti-DNase B test.

When is it ordered?

The anti-DNase B and ASO tests are ordered when you have signs and symptoms that a health care practitioner suspects may be due to complications caused by a recent strep infection, including rheumatic fever and post-streptococcal glomerulonephritis.

Some signs and symptoms of rheumatic fever may include:
- Fever
- Joint swelling and pain in more than one joint, especially in the ankles, knees, elbows and wrists, sometimes moving from one joint to another
- Small, painless nodules under the skin
- Rapid, jerky movements (Sydenham chorea)
- Skin rash
- Sometimes the heart can become inflamed (carditis); this may not produce any symptoms but also may lead to shortness of breath, heart palpitations, or chest pain

Some signs and symptoms of glomerulonephritis may include:
- Fatigue, decreased energy
- Producing less urine
- Bloody urine
- Rash
- Joint pain
- Fluid accumulation and swelling (edema)
- High blood pressure

However, these symptoms can be seen in other conditions. Anti-DNase B testing may be performed twice, with samples collected about two weeks apart, for acute and convalescent titers. This is done to determine if the antibody level is rising, falling, or remaining the same.

What does the test result mean?

Anti-DNase B and ASO test results may be interpreted together. Anti-DNase B and ASO antibodies are produced about a week to a month after a strep infection. The amount of anti-DNase B antibody (titer) peaks about 4 to 6 weeks after the illness and may remain elevated for several months. Anti-DNase B titers typically remain elevated longer than ASO antibody titers.

Negative anti-DNase B and ASO tests or very low titers mean that it is unlikely you had a recent strep infection. This is especially true if a sample taken 10 to 14 days later is also negative. Your signs and symptoms are likely due to a cause other than a recent strep infection. A small percentage (10-15%) of those who have a complication related to a recent strep infection will not have an elevated ASO titer. This is especially true with glomerulonephritis that develops after a skin strep infection. These people may, however, have an elevated anti-DNase B titer and/or an elevation in another streptococcal antibody such as an elevated antihyaluronidase titer.

Elevated or rising antibody titers of anti-DNase B or ASO mean that it is likely you had a recent strep infection. If you have signs and symptoms of rheumatic fever or glomerulonephritis, an elevated anti-DNase B and/or ASO titer can help confirm the diagnosis.

Is there anything else I should know?

The anti-DNase B test may be ordered along with another streptococcal antibody test, such as an antihyaluronidase, especially if the ASO test is negative.
A small percentage (10-15%) of people with a post-streptococcal complication will not have an elevated ASO but may have an elevated anti-DNase B or antihyaluronidase titer. This is especially true with glomerulonephritis linked to a recent skin strep infection.

Can an anti-DNase B or an ASO be used to diagnose strep throat?

No. A rapid strep test and/or throat culture are the best methods to diagnose strep throat. It is important that strep throat be promptly identified and treated to avoid complications and to avoid passing the infection on to others.

Can I develop rheumatic fever or glomerulonephritis at the same time as my strep throat?

These complications typically develop after the initial strep infection resolves. There is a delay before signs and symptoms appear after the streptococcal infection: about 1-2 weeks for glomerulonephritis and about 2-3 weeks for rheumatic fever.

If I am diagnosed with strep, will an anti-DNase B or ASO always be performed?

No. In general, these tests are only performed when you have symptoms suggesting that a complication may have developed after a group A strep infection that was not diagnosed and treated appropriately. Most people do not experience these complications, so these tests are not routinely done.

Like the rapid strep test, can the anti-DNase B test be performed in my doctor's office?

Most doctors' offices will not perform this test, and some laboratories may not offer it. Your blood will typically be sent to a reference laboratory for testing.
189359
https://jlmartin.ku.edu/courses/math725-S16/notesnew.pdf
Math 725, Spring 2016 Lecture Notes

Contents
1. The Basics: Graphs; Isomorphisms and subgraphs; Some applications of graph theory; Some important graphs and basic constructions; Vertex degrees and some counting formulas; Paths, trails, walks and cycles; Trees and forests; Bipartite graphs; Eulerian graphs; Matrices associated with graphs.
2. Counting Spanning Trees (Not in Diestel): Deletion and contraction; The Matrix-Tree Theorem; The Prüfer Code; MSTs and Kruskal's algorithm.
3. Matchings and Covers: Basic definitions; Equalities among matching and cover invariants; Augmenting paths; Hall's Theorem and consequences; Weighted bipartite matching and the Hungarian Algorithm; Stable matchings; Nonbipartite matching.
4. Connectivity, Cuts, and Flows: Vertex connectivity; Edge connectivity; The structure of 2-connected and 2-edge-connected graphs; Counting strong orientations; Menger implies edge-Menger; Network flows; The Ford-Fulkerson algorithm; Corollaries of the Max-Flow/Min-Cut Theorem; Path covers and Dilworth's theorem.
5. Coloring: The basics; Greedy coloring; Alternative formulations of coloring; The chromatic polynomial (not in Diestel); The chromatic recurrence; Colorings and acyclic orientations; Perfect graphs.
6. Planarity and Topological Graph Theory: Plane graphs, planar graphs, and Euler's formula; Applications of Euler's formula; Minors and topological minors; Kuratowski's Theorem; The Five-Color Theorem; Planar duality; The genus of a graph; Heawood's Theorem.
7. The Tutte Polynomial: Definitions and examples; The chromatic polynomial from the Tutte polynomial; Edge activities.
8. Probabilistic Methods in Graph Theory: A very brief taste of Ramsey theory; The reliability polynomial; Random graphs; Back to Ramsey theory; Random variables, expectation and Markov's inequality; Graphs with high girth and chromatic number; Threshold functions; Using the variance for lower thresholds.

1. The Basics

1.1. Graphs.

Definition 1.1. A graph G is a pair (V, E), where V is a (finite), nonempty set of vertices and E is a (finite) set of edges. Each edge e is given by an unordered pair of two (possibly equal) vertices v, w, called its endpoints.

[Figure: a graph G on vertices 1, ..., 5 and a graph H on vertices 1, ..., 4.]

Equivalent statements:
- v, w are the endpoints of e; or
- v, w are joined by the edge e; or
- e = vw.

Technically, this last notation should only be used when e is the only edge joining v and w, but we often ignore that requirement for simplicity. Note that e = wv is equivalent. Sometimes, we don't want to bother to give the edge e a name; it is enough to know that there exists some edge joining v and w. Then we might say that v, w are adjacent or are neighbors. (It's tempting to say "connected" instead, but you should try to make a habit of resisting temptation, because that term properly means something else.)

Graphs can have loops (edges that join a vertex to itself) and parallel edges (edges with the same pairs of endpoints). Sometimes we want to exclude these possibilities, often because they are irrelevant. A graph with no loops and no parallel edges is called simple.
When studying graph theory, one quickly learns to be flexible about notation. For instance, when working with a single graph we want to use the concise symbols V and E for its vertex and edge sets; but if there are several different graphs around then it is clearer to write V(G), E(H), etc.

1.2. Isomorphisms and subgraphs.

As in many fields of mathematics, one of our first orders of business is to say when two of the things we want to study are the same, and when one is a subthing of another thing.

Definition 1.2. Let G, H be graphs. An isomorphism is a bijection f : V(G) → V(H) such that for every v, w ∈ V(G),

    #{edges of G joining v, w} = #{edges of H joining f(v), f(w)}.

"G ≅ H" means G, H are isomorphic.

Notice that this has nothing to do with what the graph looks like on the paper. A drawing of a graph is not the same as the graph itself!

[Figure: three drawings of the same graph on vertices 0, ..., 7.]

These three graphs are all isomorphic to each other; the red numbers indicate the isomorphism. Think of an isomorphism as a relabeling, which doesn't really change the underlying structure of the graph.

Definition 1.3. An isomorphism invariant is a function ψ on graphs such that ψ(G) = ψ(H) whenever G ≅ H. (Equivalently, a function on equivalence classes of graphs.)

For example, the number of vertices is an invariant, as is the number of edges, but not the number of crossings when you draw the graph (although the minimum number of crossings among all possible drawings is indeed an invariant). Nor is a property like "Every edge has one vertex labeled with an odd number and one vertex labeled with an even number," since there's nothing to prevent me from shuffling the numbers to make this false. On the other hand, "The graph can be labeled so that every edge has one odd and one even label" is an invariant.

It is always possible to draw a given graph in lots of different ways, many geometrically inequivalent. It is crucial to remember that a graph is a combinatorial object, not a geometric one. That is, the structure of a graph really is given by its list of vertices, edges and incidences, not by a drawing composed of points and lines.
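A concrete invariant that is easy to compute is the degree sequence (degrees are defined formally in Section 1.5): relabeling the vertices permutes the degrees but cannot change their multiset. A small Python sketch, storing a simple graph as a dictionary mapping each vertex to its set of neighbors (this encoding and the function name are my own choices, not from the notes):

    def degree_sequence(adj):
        """Sorted list of vertex degrees; equal for isomorphic simple graphs."""
        return sorted(len(nbrs) for nbrs in adj.values())

    # Two labelings of the same abstract graph: a 4-cycle.
    G = {1: {2, 4}, 2: {1, 3}, 3: {2, 4}, 4: {1, 3}}
    H = {'a': {'c', 'd'}, 'b': {'c', 'd'}, 'c': {'a', 'b'}, 'd': {'a', 'b'}}
    print(degree_sequence(G) == degree_sequence(H))  # True (necessary, not sufficient)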
Definition 1.4. Let G be a graph. A subgraph of G is a graph H such that V(H) ⊆ V(G) and E(H) ⊆ E(G). For short we write H ⊆ G.

Note that it is not true that for every X ⊆ V(G) and F ⊆ E(G), the pair (X, F) is a subgraph of G, because it might not even be a graph: it needs to satisfy the condition that every endpoint of an edge in F belongs to X. (You can't have an edge dangling in the breeze; there needs to be a vertex on each end of it.)

Every subset F ⊆ E(G) determines a subgraph whose vertex set is the set of all vertices that are endpoints of at least one edge in F. Also, every subset X ⊆ V(G) determines a subgraph G[X], the induced subgraph, whose edge set is the set of all edges of G with both endpoints in X. Being an induced subgraph is a stronger property than being a subgraph.

1.3. Some applications of graph theory.

Graph theory has about a zillion applications. Here are a few.

Discrete optimization: a lot of discrete optimization problems can be modeled using graphs. For example, the TSP (traveling salesperson problem); the knapsack problem; matchings; cuts and flows.

Discrete geometry and linear optimization: the vertices and edges of a polytope P form a graph called its 1-skeleton; when using the simplex method to solve a linear programming problem whose feasible region (i.e., the set of legal, although perhaps not optimal, solutions) is P, the 1-skeleton of P describes exactly the steps of the algorithm.

Algebra: the Cayley graph of a group G is a graph whose edges correspond to multiplication by one of a given set of generators; basic group-theoretic notions such as relations, conjugation, etc. now have natural descriptions in terms of the Cayley graph.

Topology: you can study an infinite and therefore complicated topological space by replacing it with a finite simplicial complex (a generalized kind of graph) from which you can calculate properties of the original space; also, deep graph-theoretic concepts such as deletion/contraction often have topological analogues.

Theoretical computer science: many fundamental constructions such as finite automata are essentially glorified graphs, as are data structures such as binary search trees.

Chemistry: a molecule can be regarded as a graph in which vertices are atoms and edges are bonds. Amazingly, the chemical properties of a substance, such as its boiling point, can sometimes be predicted with great accuracy from the purely mathematical properties of the graph of the molecule!

Biology: more complicated structures like proteins can be modeled as graphs. The theory of rigidity of graphs has been used to understand how proteins fold and unfold.

Not to mention the wonderful applicability of graphs to all manner of subjects including forestry, communications networks, efficient garbage collection, and evolutionary biology.

1.4. Some important graphs and basic constructions.

The path Pn (Diestel: P^n) has n vertices and n - 1 edges, connected sequentially. The cycle Cn (Diestel: C^n) has n vertices and n edges and can be drawn as a polygon.

[Figure: the paths P1, P2, P3, P4 and the cycles C1, C2, C6.]

The complete graph Kn (Diestel: K^n) has n vertices and one edge between each pair of vertices. Thus there are (n choose 2) = n(n-1)/2 edges in total. Often we assume that the vertex set is [n] = {1, 2, ..., n}. (This notation is standard in combinatorics.) A complete graph is also called a clique, particularly when it occurs inside another graph.

The complete bipartite graph Kp,q has p + q vertices, with p of them painted red and q painted blue, and an edge between each pair of differently colored vertices, for a total of pq edges.

[Figure: K5 and K4,2, together with the coincidences P2 = K2 = K1,1, C3 = K3, and C4 = K2,2.]

The empty graph K̄n consists of n vertices and no edges. A copy of K̄n appearing as an induced subgraph of a graph G is the same as a set of vertices of G of which no two are adjacent. Such a set is called a coclique (or independent set or stable set).

A few operations on graphs (see the sketch after this list):
- If G is a simple graph, its complement Ḡ is the graph obtained by toggling adjacency and non-adjacency.
- The underlying simple graph G^s of any graph G is obtained by deleting all loops and all but one element of each parallel class of edges. Note that the connectivity relation on G^s is the same as that on G.
- The disjoint union G + H is the union of G and H, assuming that the vertex sets are disjoint. For example, K̄n + K̄m = K̄(n+m) and Km,n = K̄m ∗ K̄n.
- The join G ∗ H also has vertex set V(G) ∪̇ V(H), but this time we add every possible edge between a vertex of G and a vertex of H.
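For simple graphs these operations are straightforward to realize in the dictionary-of-neighbor-sets encoding used earlier. A sketch, assuming disjoint vertex sets where needed (the function names are illustrative, not from the notes):

    def complement(adj):
        """Complement of a simple graph."""
        V = set(adj)
        return {v: V - adj[v] - {v} for v in V}

    def disjoint_union(adj1, adj2):
        """G + H, assuming V(G) and V(H) are disjoint."""
        return {**adj1, **adj2}

    def join(adj1, adj2):
        """G * H: disjoint union plus all edges between the two vertex sets."""
        g = {v: set(nbrs) | set(adj2) for v, nbrs in adj1.items()}
        g.update({v: set(nbrs) | set(adj1) for v, nbrs in adj2.items()})
        return g

    # K2bar * K2bar = K_{2,2}, as in the example above:
    print(join({1: set(), 2: set()}, {3: set(), 4: set()}))
    # {1: {3, 4}, 2: {3, 4}, 3: {1, 2}, 4: {1, 2}}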
1.5. Vertex degrees and some counting formulas.

The number of vertices of a graph G is its order, often written n(G). The number of edges is its size, written e(G). Often when we are talking about a single graph G, we will just write n and e. Diestel uses |G| for the order and ‖G‖ for the size.

Definition 1.5. Let G = (V, E) be a graph. The degree of a vertex v in G, written d(v) or d_G(v), is the number of edges of G having v as an endpoint (counting loops twice). The minimum and maximum degrees of a vertex in G are written δ(G) and Δ(G) (or δ and Δ).

Proposition 1.6 (Degree-Sum Formula / Handshaking Theorem). For every graph G,

    ∑_{v ∈ V(G)} d(v) = 2 e(G).

Proof. Each edge contributes 2 to each side of the equation. ∎

Corollary 1.7. Every graph has an even number of vertices of odd degree.

Corollary 1.8. For every vertex v, δ(G) ≤ d(v) ≤ Δ(G), so δ ≤ 2e/n ≤ Δ.

Definition 1.9. A graph G is d-regular if every vertex has degree d. In this case equality holds in Corollary 1.8.

Corollary 1.10. There are no regular graphs of odd degree and odd order.

Example 1.11. The cycle Cn is 2-regular and the clique Kn is (n - 1)-regular. An icosahedron has 12 vertices and is 5-regular, so e = dn/2 = 5 · 12/2 = 30.

Example 1.12. The n-dimensional cube or hypercube Qn is defined as follows. Let V = 2^[n] be the power set of [n] (so in particular |V| = 2^n), and let E = {ST : |S △ T| = 1}, where △ denotes symmetric difference.

[Figure: the hypercubes Q0, Q1, Q2, Q3.]

Note that |V(Qn)| = 2^n and it is regular of degree n (why?). Therefore, |E(Qn)| = n 2^(n-1). Equivalently, you can regard the vertices of Qn as bit strings of length n, with two vertices adjacent if they agree in n - 1 places. These two descriptions are isomorphic via associating a bit string (b1, ..., bn) with the set {i ∈ [n] : bi = 1} ⊆ [n].
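Example 1.12 makes a good machine check of the degree-sum formula. A Python sketch building Q_n on bit strings, with two strings adjacent when they differ in exactly one coordinate:

    from itertools import product

    def hypercube(n):
        """Q_n on bit strings of length n; edges flip one coordinate."""
        adj = {}
        for v in product((0, 1), repeat=n):
            adj[v] = {v[:i] + (1 - v[i],) + v[i + 1:] for i in range(n)}
        return adj

    n = 4
    Q = hypercube(n)
    degree_sum = sum(len(nbrs) for nbrs in Q.values())
    print(degree_sum, 2 * n * 2 ** (n - 1))  # both 64: sum of degrees = 2 e(Q_n)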
1.6. Paths, trails, walks and cycles.

Definition 1.13. Let x, y ∈ V(G). An x,y-walk in G is an alternating sequence of vertices and edges

    x = v0, e0, v1, e1, ..., v_(n-1), e_(n-1), v_n = y

where v_i, v_(i+1) are the endpoints of e_i for all i. The length of the walk is the number of edges, namely n. The vertices x, y are the endpoints; the other vertices are internal to the walk. The walk is trivial if n = 0.

It's not always necessary to specify all this data; e.g., we could just give the starting vertex and a sequence of edges. Or, if G has no parallel edges, we could just give the sequence of vertices. Often we don't care about what the internal vertices are; in this case we can write just xWy (where technically W stands for e0, v1, ..., v_(n-1), e_(n-1)). This makes it easy to concatenate walks: if xWy and yW′z are walks, then so is xWyW′z. We'll write ℓ(W) for the length of W.

Definition 1.14. A walk is closed if v0 = v_n. A trail is a walk with no repeated edges. A path is a walk with no repeated vertices. A cycle is a closed path.

These definitions of "path" and "cycle" are consistent with the previous ones. A path in G of length n is the same thing as a subgraph of G isomorphic to P_(n+1), and a cycle of length n is just a subgraph of G isomorphic to Cn.

Paths are the nicest kind of walks. Frequently, we are in a situation where we know how to walk from u to v, but what we really want is a u,v-path. Fortunately, if a walk is not a path, then it must contain some redundancy which can be eliminated, and repeating this process will eventually yield a path. To be precise:

Proposition 1.15. If G has an x,y-walk, then it has an x,y-path.

Proof. Let xWy be a walk. If some vertex z occurs more than once, then xWy has the form xW′zW″zW‴y, where W′ and W‴ may be trivial, but W″ is not. But then xW′zW‴y is a strictly shorter x,y-walk (since its length is ℓ(W) - ℓ(W″)). Keep repeating this process until no further shortening is possible, which means that the walk is a path. ∎

Technically, the proof of Lemma 1.2.5 is an inductive argument, but I have phrased it instead as a recursive algorithm (which is really the same thing). The proof implies that every minimal-length walk is in fact a path.

Definition 1.16. Two vertices of G are connected if there is a path in G between them (equivalently, a walk). The graph G is connected if every pair of vertices u, v is connected. The (connected) components of G are its maximal connected subgraphs. The number of components is denoted c(G).

Note that any two adjacent vertices are connected, but not every two connected vertices are adjacent.

Proposition 1.17. The relation "u is connected to v" is an equivalence relation on V(G), whose equivalence classes are the vertex sets of the connected components of G.

Proof. Connectedness is reflexive (consider the trivial walk), symmetric (walks can be reversed), and transitive (walks can be concatenated). ∎

Proposition 1.18. Let G be connected on n vertices. Then the vertices can be labeled v1, ..., vn so that every induced subgraph G_j := G[v1, ..., vj] is connected, for 1 ≤ j ≤ n. In addition, v1 can be chosen arbitrarily.

Proof. Choose v1 arbitrarily. Clearly G1 ≅ K1 is connected. To construct G_(j+1) from G_j, choose any vertex x ∉ {v1, ..., vj} and find a path from v1 to x. Take v_(j+1) to be the first vertex on this path not in G_j. ∎

Again, I have chosen to express the proof as an algorithm rather than a formal proof by induction.

Corollary 1.19. If G is connected, then e(G) ≥ n(G) - 1. More generally, c(G) ≥ n(G) - e(G) for all G.

Proposition 1.20. Let a ∈ E(G), and let G - a denote the graph obtained by removing a. If a belongs to a cycle in G, then c(G - a) = c(G). Otherwise, c(G - a) = c(G) + 1. In the latter case, a is called a bridge or cut-edge or isthmus or coloop of G.

Proof. First, it is clear that every two vertices connected in G - a are connected in G, so c(G) ≤ c(G - a).

Suppose that a belongs to a cycle, and let P be the path that constitutes the rest of the cycle. Then any two vertices that are connected in G are connected in G - a, because a can be replaced with P in any walk. Therefore the connectivity relations on G and G - a are the same, and c(G) = c(G - a).

Now suppose that a does not belong to any cycle. Then its two endpoints cannot be connected by any path P ⊆ G - a, for then P ∪ a would be a cycle in G containing a. So c(G - a) > c(G). On the other hand, adding a to G - a can only join two components into one. So c(G - a) = c(G) + 1. ∎

By the way, a cut-vertex is a vertex v such that c(G - v) > c(G). (Synonyms: cutpoint, articulation point.) Here G - v means the graph obtained by deleting v and all its incident edges; equivalently, G - v = G[V(G) \ v].

Example 1.21. [Figure: a connected graph G on vertices p, q, r, s, t, u, v.] In the connected graph G shown, q, r and s are cut-vertices; the others aren't. Note that c(G - q) = c(G - s) = 2 but c(G - r) = 3. The bridges are pq, qr, rv. We have c(G - a) = 2 for each bridge a.

Note that a loop cannot be a bridge, nor can any edge that has another parallel edge.
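Proposition 1.20 gives an immediate, if naive, bridge test: delete the edge and see whether the number of components goes up. A Python sketch in the dictionary-of-neighbor-sets encoding (simple graphs only; recall that an edge with a parallel copy is never a bridge anyway):

    def components(adj):
        """Number of connected components, by depth-first search."""
        seen, c = set(), 0
        for s in adj:
            if s in seen:
                continue
            c += 1
            stack = [s]
            while stack:
                v = stack.pop()
                if v not in seen:
                    seen.add(v)
                    stack.extend(adj[v] - seen)
        return c

    def is_bridge(adj, u, v):
        """Test edge uv by comparing c(G - uv) with c(G)."""
        smaller = {w: nbrs - ({v} if w == u else {u} if w == v else set())
                   for w, nbrs in adj.items()}
        return components(smaller) == components(adj) + 1

    path = {1: {2}, 2: {1, 3}, 3: {2}}
    print(is_bridge(path, 1, 2))  # True: every edge of a path is a bridge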
Example 1.22. A cycle has no cut-vertices or bridges. On the other hand, every internal vertex of a path (but not either of the endpoints) is a cut-vertex, and every edge is a bridge.

1.7. Trees and forests.

Definition 1.23. A graph is acyclic, or a forest, if it has no cycles. By Proposition 1.20, this is equivalent to the condition that every edge is a bridge. A connected forest is a tree.

Proposition 1.24. A graph G is acyclic if and only if c(G) = n(G) - e(G). In particular, every tree T has n(T) - e(T) = 1.

Proof. Start with the vertex set V(G) and no edges. This is certainly acyclic, and c = n and e = 0. Now add edges one by one. Each time you do so, e increases by 1 and c might or might not decrease by 1. If c ever stays constant, you just created a cycle. Otherwise, every edge is a bridge, which means that you didn't create a cycle. ∎

The following corollary will be very useful (although not immediately).

Corollary 1.25. Every tree T with n ≥ 2 vertices has at least two leaves (vertices of degree 1).

Proof. Handshaking says that ∑_{v ∈ V(T)} d_T(v) = 2 e(T) = 2n - 2. If a sum of n positive integers equals 2n - 2, then at least two of the summands must equal 1. ∎

[Figure: the three isomorphism classes of trees on 5 vertices.]

Theorem 1.26 (Characterizations of trees; Diestel Thm. 1.5.1). Let G = (V, E) with n = |V|, e = |E|. TFAE:
(1) G is a tree (i.e., connected and acyclic).
(2) G is connected and e = n - 1.
(2') G is minimally connected, i.e., G - a is disconnected for every a ∈ E.
(3) G is acyclic and e = n - 1.
(3') G is maximally acyclic, i.e., G + xy has a cycle for every nonadjacent x, y ∈ V.
(4) G has no loops, and for every v, w ∈ V(G), there is exactly one v,w-path in G.

Proof. We've already proved that G is acyclic if and only if c = n - e. It's actually easy to prove from this that (1), (2), (3) are equivalent:
- If G is acyclic and connected, then c = 1 = n - e.
- If G is acyclic and e = n - 1, then c = 1, i.e., G is connected.
- If G is connected and e = n - 1, then in fact c = 1 = n - e, which means that G is acyclic.
The proof of (4 ⟺ 1) is left as an exercise. ∎

Definition 1.27. Let G be connected. A spanning tree is a tree T ⊆ G with V(T) = V(G). (More generally, a spanning subgraph of G is a subgraph with the same vertex set, i.e., a subgraph obtained by deleting edges but not deleting any vertices.)

Every connected graph has at least one spanning tree. For example, you can find one by labeling the vertices as in Prop. 1.18 and keeping only the n - 1 edges that join v_(j+1) to a previous vertex, for each j ∈ [n - 1]. Or, you can repeatedly delete non-bridge edges until only a tree is left.

Some natural questions:
(1) How many spanning trees does a given graph have? This number τ(G) is an interesting measure of the complexity of the graph, and for many graphs there are amazing formulas for τ(G).
(2) How can you find the best spanning tree? Suppose each edge has a particular cost and you want to find the spanning tree that minimizes total cost.
We will come back to these things.

Frequently we want to think of one of the vertices r of a tree T as the root. In this case there is a partial order on vertices of the tree: x ≥ y if y lies on the unique path xPr from x to r in T (i.e., xPr factors as xP′yP″r). For every x ≠ r, the vertex adjacent to x in xPr is called its parent, denoted p(x).

Theorem 1.28. Let G be a connected simple graph and let r ∈ V(G). There exists a spanning tree T with the property that for every x ∈ V(G), the r,x-path in T has minimum length over all r,x-paths in G.
(Such a tree is called normal with respect to r, or a breadth-first search tree.)

Proof. Here is some notation that will be useful. For each x ∈ V(G), let N(x) denote the set of all vertices adjacent to x, and let N[x] = N(x) ∪ {x}. (The letter N stands for "neighborhood"; the parentheses and square brackets are intended to suggest open and closed neighborhoods respectively.) In addition, define

    N^0[x] = {x},  N^1[x] = N[x],  N^2[x] = ∪_{y ∈ N[x]} N[y],  N^3[x] = ∪_{y ∈ N^2[x]} N[y],  ...,  N^k[x] = ∪_{y ∈ N^(k-1)[x]} N[y],  ...

Equivalently, N^k[x] is the set of vertices that are at distance at most k from x (i.e., are connected to x by a path of length at most k). Since G is connected and finite, we have N^k[x] = V(G) for sufficiently large k.

Now, construct a spanning tree T with root r by the following algorithm.
0. Start by putting r in T.
1. For every x ∈ N[r] \ {r}, add the edge rx. (So p(x) = r for all such x.)
2. Each x ∈ N^2[r] \ N[r] has a neighbor y in N[r]. Add the edge xy, so that p(x) = y.
3. Each x ∈ N^3[r] \ N^2[r] has a neighbor y in N^2[r]. Add the edge xy, so that p(x) = y.
...
By induction on k, the vertices added at step k are exactly those at distance k from r. In other words, T is normal with respect to r. ∎

Some remarks:
(1) This definition of distance in fact makes G into a metric space.
(2) This algorithm can be souped up by assigning every edge e a positive real number ℓ(e) (think of this as "length" in a metric sense), and then defining the distance between two vertices to be the shortest possible total length of a path between them. In this form it is known as Dijkstra's algorithm, and is fundamental in computer science and discrete optimization. It is a theoretically efficient algorithm in the sense that its run time is polynomial in the numbers of vertices and edges.
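The proof of Theorem 1.28 is exactly breadth-first search. A Python sketch returning the parent function p (with p(r) = None), from which the tree edges {x, p(x)} can be read off; the key invariant is that the first time a vertex is discovered is at its distance from r:

    from collections import deque

    def bfs_tree(adj, r):
        """Breadth-first search tree rooted at r: a parent map realizing Theorem 1.28."""
        parent = {r: None}
        queue = deque([r])
        while queue:
            v = queue.popleft()
            for w in adj[v]:
                if w not in parent:   # first visit = shortest distance from r
                    parent[w] = v
                    queue.append(w)
        return parent

    C5 = {i: {(i - 1) % 5, (i + 1) % 5} for i in range(5)}
    print(bfs_tree(C5, 0))  # e.g. {0: None, 1: 0, 4: 0, 2: 1, 3: 4}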
1.8. Bipartite graphs.

Definition 1.29. A graph G is bipartite if V(G) = X ∪̇ Y, where X, Y are cocliques. That is, every edge has one endpoint in each of X, Y. The pair X, Y is called a bipartition and the sets X, Y themselves are partite sets or color classes. Also, we might say for short that G is an X,Y-bigraph. More generally, a graph G is k-partite if its vertex set is the disjoint union of k cocliques (also called partite sets).

- A graph is bipartite if and only if every one of its components is bipartite.
- A bipartite graph can't contain any loops (parallel edges are OK).
- Any subgraph of a bipartite graph is bipartite.
- Even cycles are bipartite but odd cycles are not.
- Qn is bipartite. Remember that the edges of Qn are pairs S, T ∈ 2^[n] with |S △ T| = 1. For this to happen, one of S, T must have even cardinality and the other odd, so parity gives a bipartition.

Proposition 1.30 (Bipartite Handshaking). Let G be an X,Y-bigraph. Then

    ∑_{v ∈ X} d(v) = ∑_{v ∈ Y} d(v) = e(G).

Corollary 1.31. If G is a regular X,Y-bigraph, then |X| = |Y| (and in particular |V(G)| is even).

Bipartite graphs arise in lots of real-world applications, notably matching problems:
- X = {workers w}, Y = {shifts s}, E = {(w, s) : worker w is able to work shift s};
- X = {job applicants a}, Y = {available jobs p}, E = {(a, p) : a is qualified for job p};
- V = {people}, E = {brother-sister pairs}: X = women, Y = men.

Lemma 1.32. Every closed odd walk contains an odd cycle.

Proof. Suppose that we have a closed odd walk that is not itself an odd cycle. Then it has some repeated vertex, so it has the form xWxW′x. But ℓ(W) + ℓ(W′) is odd, so exactly one of W or W′ is odd (say W), which means that xWx is a shorter closed odd walk. Repeating this, we eventually obtain an odd cycle. ∎

Proposition 1.33. A graph is bipartite if and only if it contains no odd cycle.

Proof. Odd cycles are non-bipartite, so no bipartite graph can contain an odd cycle.

Now suppose that G contains no odd cycle. We may as well assume that G is connected. Fix a vertex v and define

    X = {x ∈ V(G) : G has an even path vPx},  Y = {y ∈ V(G) : G has an odd path vP′y}.

Then X ∪ Y = V(G) because G is connected. If x ∈ X ∩ Y, then we have a closed walk vPxP′v with ℓ(P) even and ℓ(P′) odd, but by the Lemma, this means that G has an odd cycle, which is impossible. Hence V(G) = X ∪̇ Y.

Suppose that two vertices x, x′ ∈ X are adjacent via an edge a. Then again we have a closed walk vPxax′P′v, of odd length ℓ(P) + ℓ(P′) + 1 (since P, P′ are even), which again is a contradiction. Hence X is a coclique. The same argument implies that Y is a coclique (here P, P′ are both odd so again ℓ(P) + ℓ(P′) + 1 is odd). ∎

Note that this proof is essentially constructive: if G is bipartite, you can construct a bipartition by picking a starting vertex, coloring it blue, and walking around the graph, toggling your color between blue and red at every step. (For that matter, you can test bipartiteness easily by doing exactly this and seeing if it works.)

One way to think about this: odd cycles are the minimal obstructions to being bipartite.

Corollary 1.34. Acyclic graphs are bipartite.

Proof. If you have no cycles, you certainly have no odd cycles! ∎

1.9. Eulerian Graphs.

Königsberg Bridge Problem (Euler, 1737)

Definition 1.35. A circuit (or tour) in a graph is a closed trail, i.e., a walk that ends where it started and does not repeat any edge. An Euler circuit of a graph is a circuit using every edge. A graph is Eulerian if it has an Euler circuit.

Example: K4 is not Eulerian. K5 is.

- Removing or adding loops does not affect whether or not a graph is Eulerian.
- If G is Eulerian and disconnected, then it has at most one nontrivial component.
- So from now on, suppose that G is loopless and connected.

Theorem 1.36. A connected graph G is Eulerian if and only if it is an even graph, i.e., every vertex has even degree.

Proof. (⟹) Let W be an Euler tour. Then W leaves and enters each vertex the same number of times, and since it traverses each edge exactly once, every vertex must therefore have even degree.

(⟸) Let W = x···y be a trail of greatest possible length. I claim that W is in fact a circuit, i.e., x = y. Indeed, if x ≠ y, then the number of edges of W incident to y is odd, but by the assumption that G is even, there must be at least one edge of G \ W incident to y, which contradicts the assumption that W is of maximum length.

Now, suppose that W is not an Euler tour. Then there is some vertex v that has at least one edge in W and at least one edge e = vu not in W. Say W = xW′vW″x; then u e v W″ x W′ v is a trail, but it is longer than W, which is a contradiction. ∎

Another method of proof is a little more constructive. By induction on the number of edges, every even graph decomposes as an (edge-)disjoint union of cycles (since erasing the edges in a cycle preserves evenness), and the cycles can be glued together to produce an Euler tour.

There is a simple method, called Fleury's algorithm, for constructing an Euler tour in an even connected graph. Start at any vertex and start taking a walk, erasing each edge after you traverse it. There is only one rule: cross a bridge only if it is the only option open to you.
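Fleury's algorithm needs a bridge test at every step, so in code it is often easier to use a different classical method, Hierholzer's algorithm (not discussed above), which splices cycles together using a stack. A Python sketch for a connected even multigraph given as a list of edges:

    def euler_circuit(edges, start):
        """Hierholzer's algorithm on a connected even multigraph."""
        # adjacency as lists of edge indices, so parallel edges are allowed
        adj = {}
        for i, (u, v) in enumerate(edges):
            adj.setdefault(u, []).append(i)
            adj.setdefault(v, []).append(i)
        used = [False] * len(edges)
        stack, tour = [start], []
        while stack:
            v = stack[-1]
            while adj[v] and used[adj[v][-1]]:
                adj[v].pop()                      # discard already-traversed edges
            if adj[v]:
                i = adj[v].pop()
                used[i] = True
                u, w = edges[i]
                stack.append(w if v == u else u)  # walk across edge i
            else:
                tour.append(stack.pop())          # vertex exhausted: backtrack
        return tour[::-1]

    # K5 is even (all degrees 4), so it has an Euler circuit:
    K5 = [(i, j) for i in range(5) for j in range(i + 1, 5)]
    print(euler_circuit(K5, 0))  # a closed walk using all 10 edges exactly once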
1.10. Matrices associated with graphs.

[One note: prove that every tree has at least two leaves. I think I forgot this last time.]

Let G be loopless, V(G) = {v1, ..., vn}, and E(G) = {e1, ..., er}.

Definition 1.37. The adjacency matrix is the n × n matrix A = A(G) = [a_ij], where a_ij is the number of edges joining vertices i and j. Note that A^T = A.

Fix an orientation on E(G). That is, for each edge, call one of its vertices the head and the other the tail. What we now have is a directed graph, or digraph, which can be drawn by replacing each edge with an arrow pointing from the tail to the head.

Definition 1.38. The incidence matrix is the n × r matrix B = B(G) = [b_ve], where b_ve = +1 if v = head(e), -1 if v = tail(e), and 0 otherwise.

Example 1.39. Let G be the following graph (actually, it's the Königsberg bridge graph):

[Figure: the Königsberg bridge graph on vertices a, b, c, d.]

Then, with a suitable ordering of the vertices and edges,

    A(G) = [ 0  2  0  1 ]        B(G) = [  1  1  1  0  0  0  0 ]
           [ 2  0  2  1 ]               [ -1 -1  0  1 -1 -1  0 ]
           [ 0  2  0  1 ]               [  0  0  0 -1  1  0 -1 ]
           [ 1  1  1  0 ]               [  0  0 -1  0  0  1  1 ]

Warning: Diestel defines these matrices A and B as living over Z2 instead of R. This doesn't affect the behavior of A or B appreciably, and it has the advantage of making the orientation irrelevant (since 1 = -1 mod 2). However, in order to work with L you really have to work over R.

Theorem 1.40. Let G = (V, E) be a connected graph, H = (V, S) a spanning subgraph, and let B(H) = {B_e : e ∈ S} be the corresponding set of columns of B = B(G). Then:
- H acyclic ⟺ B(H) linearly independent;
- H connected ⟺ B(H) spans the column space of B;
- H is a spanning tree ⟺ B(H) is a column basis for B.

Proof. First, notice that all of these properties are independent of the choice of orientation (since reorienting simply multiplies one or more columns by -1 without changing which sets of columns are linearly (in)dependent).

Suppose that C is a cycle. Traverse the cycle starting at any point, and keep track of whether you walk forward or backward (i.e., with or against the arrows). Then

    ∑_{forward e ∈ C} B_e - ∑_{backward e ∈ C} B_e = 0.

For example, consider the 6-cycle on vertices a, b, c, d, e, f, traversed clockwise, with the edges oriented so that ab, cd, ef, fa are traversed forward and bc, de backward. Then B_ab - B_bc + B_cd - B_de + B_ef + B_fa = 0, where the incidence matrix (rows a, ..., f; columns ab, bc, cd, de, ef, fa) is

    [ -1  0  0  0  0  1 ]
    [  1  1  0  0  0  0 ]
    [  0 -1 -1  0  0  0 ]
    [  0  0  1  1  0  0 ]
    [  0  0  0 -1 -1  0 ]
    [  0  0  0  0  1 -1 ]

Now, suppose that H has only one edge e incident to some vertex x. Then B_e is the only column in B(H) to have a nonzero entry in the x row, so it cannot take part in any linear dependence among the columns of B(H). By induction, it follows that if H is acyclic, then B(H) is a linearly independent set (remove leaves one by one).

In particular, if H is a spanning tree then the rank of B(H) is n - 1. On the other hand, the rank of the entire incidence matrix B is no more than n - 1, since it has n rows and they are not linearly independent (their sum is zero). Hence every spanning tree corresponds to a column basis, and any edge set containing a spanning tree spans the column space.

Suppose that H has c components H1, ..., Hc. Then the column spaces of B(H1), ..., B(Hc) are disjoint, so

    rank B(H) = ∑_{i=1}^{c} rank B(Hi) = ∑_{i=1}^{c} (n(Hi) - 1) = n - c

(since each Hi is connected). We have seen that rank B = n - 1, so B(H) is a spanning set* if and only if H is connected. ∎

*Unfortunately, the term "spanning" can cause problems. In the graph theory context it refers to a subgraph that contains all the vertices of its parent graph; in linear algebra it refers to a collection of vectors that span a subspace. Be careful.
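Theorem 1.40 is easy to check numerically. A Python sketch with numpy, using the Königsberg incidence matrix from Example 1.39 (the particular choice of spanning-tree columns is mine):

    import numpy as np

    # incidence matrix of the Königsberg graph from Example 1.39
    B = np.array([[ 1,  1,  1,  0,  0,  0,  0],
                  [-1, -1,  0,  1, -1, -1,  0],
                  [ 0,  0,  0, -1,  1,  0, -1],
                  [ 0,  0, -1,  0,  0,  1,  1]])

    print(np.linalg.matrix_rank(B))           # 3 = n - 1, as in the proof
    tree_cols = B[:, [0, 2, 3]]               # three edges forming a spanning tree
    print(np.linalg.matrix_rank(tree_cols))   # 3: the columns are a basis
    cycle_cols = B[:, [0, 1]]                 # two parallel edges: a 2-cycle
    print(np.linalg.matrix_rank(cycle_cols))  # 1: linearly dependent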
Let T(G) denote the set of spanning trees of G, and let τ(G) = |T(G)| be the number of spanning trees.

G          τ(G)
any tree   1
Cn         n
K3         3
K4         16
K2,3       12
K3,3       81
Q3         384
Petersen   2000

Definition 2.1. Let G be a graph and e ∈ E(G) an edge. The deletion G − e is the graph obtained by erasing e, leaving its endpoints (and everything else) intact. The contraction G/e is obtained by erasing e and merging its endpoints into a single vertex. (Contraction is not defined if e is a loop.)

Note: n(G−e) = n(G), n(G/e) = n(G) − 1, e(G−e) = e(G) − 1, e(G/e) = e(G) − 1.

Two kinds of edges are special:
• If e is a loop, then it can't belong to any spanning tree. So T(G) = T(G−e) and τ(G) = τ(G−e).
• If e is a bridge, then it belongs to every spanning tree (since you can't have a connected spanning subgraph without it). In fact τ(G) = τ(G/e).

By contrast, each "ordinary" edge (one that is neither a loop nor a bridge) belongs to at least one spanning tree, but not to all spanning trees. More specifically:

Theorem 2.2. If e ∈ E(G) is not a loop, then τ(G) = τ(G−e) + τ(G/e).

Proof. We will find bijections {T ∈ T(G) : e ∉ E(T)} → T(G−e) and {T ∈ T(G) : e ∈ E(T)} → T(G/e). The first bijection is the easy one: a spanning tree of G not containing e is the same thing as a spanning tree of G − e.

For the second bijection, if T is a spanning tree of G containing e, then T′ = T/e is a spanning tree of G/e. Indeed, T′ is connected because T is, and e(T′) = e(T) − 1 = (n(G) − 1) − 1 = n(G/e) − 1. On the other hand, given any spanning tree T′ ∈ T(G/e), the corresponding edges of G, together with e itself, form a spanning tree of G. □

Remark 2.3. The recurrence even works when e is a bridge (because G − e is disconnected, hence has zero spanning trees), and even, in a silly way, when e is a loop (G/e is undefined, so it isn't even a graph, and then doesn't have any spanning trees).

Example: By repeatedly applying deletion/contraction, we can calculate τ(G) for any graph. Here's the calculation for the "diamond graph" G obtained by removing an edge from K4, with edges labeled a, b, c, d, e.

[Figure: the deletion/contraction tree for the diamond graph, with τ of each leaf written in red: G − a has the bridge b, and contracting it gives C3, with τ = 3; G/a − b ≅ C3, with τ = 3; G/a/b ≅ C2 plus a loop, with τ = 2.]

We start by applying deletion/contraction to edge a. On the left side, G − a has a bridge b, and contracting it gives a 3-cycle, which we know has τ = 3. On the right side, G/a has neither a loop nor a bridge, so we recurse again, deleting and contracting edge b. The deletion is another 3-cycle, and contracting gives a 2-cycle plus a loop. So τ(G/a) = τ(G/a − b) + τ(G/a/b) = 3 + 2 = 5, and then τ(G) = τ(G−a) + τ(G/a) = 3 + 5 = 8.

The bad news is that computing τ(G) by deletion/contraction takes exponential time, essentially O(2^{e(G)}), because each instance of the recursion contributes a factor of 2. So this is not a good way to compute τ(G) in practice, although there are some families of graphs for which you can find interesting recurrences for τ(G) (see problem set). In the next section we will see a computationally efficient way of calculating τ(G): the Matrix-Tree Theorem, which exploits linear algebra.

2.2. The Matrix-Tree Theorem.

Definition 2.4. The Laplacian matrix is the n × n matrix L = L(G) = BBᵀ, where B is the incidence matrix. Note that L = D − A, where D is the diagonal matrix of vertex degrees. Also, the choice of orientation does not affect the Laplacian.
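The two counting methods are easy to put side by side. The following sketch (mine, not from the notes) computes τ for the diamond graph by the deletion/contraction recurrence of Theorem 2.2, and again as a reduced-Laplacian determinant, anticipating the Matrix-Tree Theorem below. Multigraphs are represented as plain edge lists on vertices 0, …, n−1.

import numpy as np

def tau(n, edges):
    """Number of spanning trees, by deletion/contraction. Exponential time."""
    if n == 1:
        return 1
    edges = [e for e in edges if e[0] != e[1]]      # loops belong to no tree
    if not edges:
        return 0                                    # n > 1, no edges: disconnected
    (u, v), rest = edges[0], edges[1:]
    merge = lambda x: u if x == v else x            # contract v into u ...
    relabel = lambda x: x - 1 if x > v else x       # ... and keep labels 0..n-2
    contracted = [(relabel(merge(x)), relabel(merge(y))) for (x, y) in rest]
    return tau(n, rest) + tau(n - 1, contracted)    # tau(G-e) + tau(G/e)

def tau_matrix_tree(n, edges):
    """Number of spanning trees as det of the reduced Laplacian (Thm 2.5)."""
    L = np.zeros((n, n))
    for u, v in edges:
        L[u, u] += 1; L[v, v] += 1
        L[u, v] -= 1; L[v, u] -= 1
    return round(np.linalg.det(L[1:, 1:]))          # delete row and column 0

diamond = [(0,1), (0,2), (0,3), (1,2), (1,3)]       # K4 minus an edge
print(tau(4, diamond), tau_matrix_tree(4, diamond)) # both print 8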
Example: The graph G = K4 − e (where e joins vertices 3 and 4) has

A =
[0 1 1 1]
[1 0 1 1]
[1 1 0 0]
[1 1 0 0]
,   B =
[ 1  1  1  0  0]
[−1  0  0  1  1]
[ 0 −1  0 −1  0]
[ 0  0 −1  0 −1]
,   L =
[ 3 −1 −1 −1]
[−1  3 −1 −1]
[−1 −1  2  0]
[−1 −1  0  2]

For the Königsberg bridge graph, A and B are as in Example 1.39, and

L(G) =
[ 3 −2  0 −1]
[−2  5 −2 −1]
[ 0 −2  3 −1]
[−1 −1 −1  3]

In general, for any loopless graph G, L(G) = BBᵀ is a symmetric, positive-semidefinite matrix of rank n(G) − c(G), with entries as follows:

ℓ_ij = dot product of the ith and jth rows of B = d_G(i) if i = j; −m_ij if i ≠ j and i, j share m_ij edges (in particular 0 if they share none).

Theorem 2.5 (Matrix-Tree Theorem; Kirchhoff, 1845). Let n ≥ 2, let G be a loopless graph on vertex set [n], let i, j ∈ V(G), and let L_{i,j} be the "reduced Laplacian" matrix obtained by deleting the ith row and jth column of L(G). Then:
(1) τ(G) = (−1)^{i+j} det L_{i,j}. (In particular, if i = j, then the sign is +1.)
(2) Let the nonzero eigenvalues of L(G) be λ1, …, λ_{n−1}. Then τ(G) = λ1 ⋯ λ_{n−1} / n.

Example: For G = K4 − e, we have

L_{1,1}(G) =
[ 3 −1 −1]
[−1  2  0]
[−1  0  2]

and det L_{1,1} = 8 = τ(G), which is the answer we had gotten by deletion-contraction. On the other hand, if you go ahead and diagonalize L (which you always can, because it's symmetric), you get diag(4, 4, 2, 0); i.e., the eigenvalues are 4, 4, 2, 0, which says that the number of spanning trees is 4 · 4 · 2 / 4 = 8.

Note: The number of nonzero eigenvalues, counting multiplicities, is always n − 1 (provided G is connected), and they are always positive real numbers (because L(G) is symmetric and positive semidefinite). But they don't have to be integers.

Example 2.6. Cayley's formula says that τ(K_n) = n^{n−2}. This can be proven by the Matrix-Tree Theorem. L = L(K_n) is the n × n matrix with n − 1 in every diagonal entry and −1 in every off-diagonal entry, and L_{n,n} looks the same, only it's an (n−1) × (n−1) matrix. What is det L_{n,n}? Don't expand the determinant! Instead, think about what the eigenvectors might look like. The all-1's vector is an eigenvector, with eigenvalue 1. And any vector with a 1 in one place and a −1 in another place is an eigenvector with eigenvalue (n − 1) − (−1) = n; these span an eigenspace of dimension n − 2. Hence, by the MTT, τ(K_n) = det L_{n,n} = 1 · n^{n−2} = n^{n−2}.

There are other graphs for which all eigenvalues are integers, such as Q_n and K_{m,n}. But it's a rare property.

Proof #1 of Matrix-Tree Theorem (1). Proceed by double induction on n = n(G) and r = e(G). If n = 2, then G consists of r parallel edges, any one of which is a spanning tree. So τ(G) = r. Meanwhile,

L = [r −r; −r r],   L_{1,1} = L_{2,2} = [r],   L_{1,2} = L_{2,1} = [−r].

If r = 0, then G is not connected, so τ(G) = 0, and meanwhile L(G) is the zero matrix.

Now suppose that n > 2 and r > 0, and that the MTT holds for all graphs with either fewer vertices, or with n vertices and fewer edges. Let e ∈ E(G); assume WLOG that its endpoints are 1 and n. Note that L(G) and L(G−e) are almost the same:

ℓ^{G−e}_{ij} = ℓ^G_{ij} − 1 if i = j = 1 or i = j = n;  ℓ^G_{ij} + 1 if {i, j} = {1, n};  ℓ^G_{ij} otherwise.

When we delete the nth row and column, we obtain reduced Laplacians that differ in only one entry, namely ℓ^G_{1,1} = ℓ^{G−e}_{1,1} + 1.
Therefore, if we evaluate each of det L̃(G) and det L̃(G−e) by expanding on the top row, then the calculations are almost the same; the difference is

det L̃(G) − det L̃(G−e) = det of the matrix with entries ℓ^G_{ij} for 2 ≤ i, j ≤ n−1.

But that matrix is precisely the reduced Laplacian of G/e obtained by deleting the row and column indexed by the merged vertex. (The degrees of nonmerged vertices aren't affected by the contraction, nor are edges between two nonmerged vertices.) Thus

det L̃(G) = τ(G−e) + τ(G/e) = τ(G)

by induction and the deletion-contraction recurrence (Theorem 2.2). □

Proof #2 of Matrix-Tree Theorem (1). This proof uses the Binet-Cauchy formula, a linear algebra fact that we will use as a black box.

The Binet-Cauchy Formula. Let m ≥ p, A ∈ R^{p×m}, B ∈ R^{m×p}, so AB ∈ R^{p×p}. For S ⊆ [m] with |S| = p, let
A_S = the p × p submatrix of A with columns S,
B_S = the p × p submatrix of B with rows S.
Then det AB = ∑_S (det A_S)(det B_S).

Let N be the "reduced incidence matrix" formed by deleting a row from the incidence matrix B; deleting the first row gives NNᵀ = L_{1,1}. Let S be a set of n − 1 edges of G, and consider the corresponding columns of N. Note that S either contains a cycle or is a spanning tree. As noted before:
• If S contains a cycle, then the columns are linearly dependent.
• If S is acyclic (hence is a spanning tree), then the (n−1) × (n−1) submatrix N_S has determinant ±1.

Now we can apply Binet-Cauchy with p = n − 1, m = e(G), A = N, B = Nᵀ:

det L_{1,1}(G) = det NNᵀ = ∑_{S ⊆ E(G), |S| = n−1} (det N_S)(det (Nᵀ)_S)   (Binet-Cauchy)
= ∑_S (det N_S)² = ∑_S [1 if S is a tree, 0 if it isn't] = τ(G). □

2.3. The Prüfer Code.

Theorem 2.7. There is a bijection P : T_n → [n]^{n−2} = {(p1, …, p_{n−2}) : p_i ∈ [n]}, called the Prüfer code, such that for every vertex v,

deg_T(v) = 1 + #{i ∈ [n−2] : p_i = v}.

Cayley's formula τ(K_n) = n^{n−2} is an immediate corollary, as is an even more refined count of trees called the Cayley-Prüfer formula. Here's the idea:
• Peel off leaves, one by one, choosing the smallest available leaf each time.
• Keeping track of which leaf is deleted is not enough information to recover the tree; we need to keep track of the stem (the unique neighbor of the deleted leaf).
• The list of stems is enough information to recover the tree.

Here is a pseudocode algorithm for computing P(T):

Input: T ∈ T_n
Output: P(T) ∈ [n]^{n−2}
T_0 := T
for i from 1 to n − 2 do {
  y_i := smallest leaf of T_{i−1}
  p_i := the unique neighbor ("stem") of y_i
  T_i := T_{i−1} − y_i
}
P(T) = (p1, …, p_{n−2})

Here it is in Sage:

def PruferCode(T): ## assume that T is a tree on 2 or more vertices
    U = deepcopy(T)
    P = []
    while U.num_verts() > 2:
        Leaves = [v for v in U.vertices() if U.degree(v) == 1]
        y = min(Leaves)
        p = U.neighbors(y)[0]  ## the unique neighbor (stem) of the leaf y
        P.append(p)
        U.delete_vertex(y)
    return P

Before proving that this algorithm gives a bijection, let's do an example. Let n = 8 and let T be the tree shown.

[Figure: the tree T on vertices 1, …, 8, with edges 18, 28, 38, 78, 14, 45, 36.]

Step 1: Leaves: 2, 5, 6, 7. Delete y1 = 2, write down p1 = 8.
Step 2: Leaves: 5, 6, 7. Delete y2 = 5, write down p2 = 4.
Step 3: Leaves: 4, 6, 7. Delete y3 = 4, write down p3 = 1.
Step 4: Leaves: 1, 6, 7. Delete y4 = 1, write down p4 = 8.
Step 5: Leaves: 6, 7. Delete y5 = 6, write down p5 = 3.
Step 6: Leaves: 3, 7. Delete y6 = 3, write down p6 = 8.

There's only one edge left, so we are done; the Prüfer code is (8, 4, 1, 8, 3, 8).
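Here is a plain-Python sketch (mine, not from the notes) of both directions of the correspondence; the decoding loop implements the reconstruction rule that appears in the proof of Theorem 2.7 below. On the running example, the round trip returns the code (8, 4, 1, 8, 3, 8) and then the original edges.

def prufer_encode(n, edges):
    adj = {v: set() for v in range(1, n + 1)}
    for u, v in edges:
        adj[u].add(v); adj[v].add(u)
    code = []
    for _ in range(n - 2):
        y = min(v for v in adj if len(adj[v]) == 1)  # smallest leaf
        p = next(iter(adj[y]))                       # its stem
        code.append(p)
        adj[p].discard(y); del adj[y]                # delete the leaf
    return code

def prufer_decode(n, code):
    remaining = set(range(1, n + 1))
    suffix = list(code)                   # p_i, ..., p_{n-2} at step i
    edges = []
    for p in code:
        leaf = min(remaining - set(suffix))  # min of [n] minus used leaves and remaining stems
        edges.append((leaf, p))
        remaining.discard(leaf)
        suffix.pop(0)                     # drop p_i from the forbidden set
    edges.append(tuple(remaining))        # the two vertices never deleted
    return edges

T = [(1,8), (2,8), (3,8), (7,8), (1,4), (4,5), (3,6)]
code = prufer_encode(8, T)
print(code)                               # [8, 4, 1, 8, 3, 8]
print(prufer_decode(8, code))             # recovers the edges of T

Lemma 2.8. For all v ∈ V(T), we have deg_T(v) = 1 + #{i : p_i = v}.

Proof.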
Every vertex v eventually becomes a leaf (either it is deleted, or it is one of the two remaining vertices). To make v into a leaf, we need to remove deg_T(v) − 1 of its neighbors, so v will occur exactly that many times in P(T). □

Proof of Theorem 2.7. Now suppose you are given the Prüfer code of a tree T ∈ T_8. I claim that you can reconstruct the deleted leaves ℓ1, …, ℓ_{n−2}, hence T. We'll do this with the running example P(T) = (p1, …, p6) = (8, 4, 1, 8, 3, 8).

• The first leaf deleted must have been ℓ1 = 2, because it is the smallest vertex not in P(T). I.e.,
ℓ1 = min([n] ∖ {p1, p2, …, p_{n−2}}).
• The second leaf deleted must have been ℓ2 = 5, because it is the smallest vertex that is not ℓ1 (hence is a vertex of T − ℓ1) and does not appear in {p2, …, p_{n−2}} (hence is a leaf of T − ℓ1). That is,
ℓ2 = min([n] ∖ {ℓ1, p2, …, p_{n−2}}).
By the same reasoning, the third leaf deleted must have been
ℓ3 = min([n] ∖ {ℓ1, ℓ2, p3, …, p_{n−2}}),
and in general, for all i ∈ [n−2], we have
ℓ_i = min([n] ∖ {ℓ1, …, ℓ_{i−1}, p_i, …, p_{n−2}}).
• Thus ℓ1, …, ℓ_{n−2} are all distinct vertices, and the edges ℓ_i p_i include all but one of the edges of T. The other edge is the one left when the algorithm finishes; its endpoints are the two vertices that were never deleted, i.e., the elements of [n] ∖ {ℓ1, …, ℓ_{n−2}}.

So we can recover T from P(T). On the other hand, starting with an arbitrary sequence (p_i) ∈ [n]^{n−2} and constructing the sequence (ℓ_i) yields a tree T such that P(T) = (p_i), so we have a bijection. □

Corollary 2.9 (Cayley-Prüfer Formula).

∑_{T ∈ T_n} ∏_{j ∈ [n]} x_j^{d_T(j)} = x1 ⋯ xn (x1 + ⋯ + xn)^{n−2}.

Proof. This is a straight calculation using Lemma 2.8:

∑_{T ∈ T_n} ∏_{i ∈ [n]} x_i^{d_T(i)} = x1 ⋯ xn ∑_{P = (p1,…,p_{n−2}) ∈ [n]^{n−2}} ∏_{i=1}^{n−2} x_{p_i}
= x1 ⋯ xn ∑_{p1 ∈ [n]} ∑_{p2 ∈ [n]} ⋯ ∑_{p_{n−2} ∈ [n]} ∏_{i=1}^{n−2} x_{p_i}
= x1 ⋯ xn (∑_{p1 ∈ [n]} x_{p1}) (∑_{p2 ∈ [n]} x_{p2}) ⋯ (∑_{p_{n−2} ∈ [n]} x_{p_{n−2}})
= x1 ⋯ xn (x1 + ⋯ + xn)^{n−2}. □

The idea of the Prüfer code can be extended to many other general kinds of graphs: complete bipartite graphs, complete multipartite graphs, and more. For a very general construction, see A. Kelmans, "Spanning trees of extended graphs", Combinatorica 12 (1992), 45–51.

2.4. MSTs and Kruskal's algorithm.

Let G = (V, E) be a loopless graph equipped with a weight function wt : E → R≥0. For a subset A ⊆ E, define wt(A) = ∑_{e ∈ A} wt(e). How do we find a spanning tree of minimum total weight?

The most naive algorithm is as follows. Find the cheapest edge and color it green. Then find the next cheapest edge and color it green (provided it isn't parallel to the first edge). Then find the next cheapest edge and color it green (provided it isn't parallel to either of the first two edges, and doesn't complete a triangle). Keep coloring the cheapest edge available green, provided you never complete a cycle.

This procedure, called Kruskal's algorithm, is very easy to understand and implement, but it is not clear that it works. But, amazingly, it does. The key to the proof is understanding the structure of the family of spanning trees of a graph G. We already know that all spanning trees have the same number of edges, but not just any family of sets of the same size can be T(G) for some G. In fact, any two spanning trees interact in a very specific way:

Proposition 2.10 (Exchange rules for spanning trees).
Let G be connected on n vertices, and let T, T′ be spanning trees of G. Then:
(1) For each e ∈ E(T) − E(T′), there exists e′ ∈ E(T′) − E(T) such that T − e + e′ is a spanning tree.
(2) For each e ∈ E(T) − E(T′), there exists e′ ∈ E(T′) − E(T) such that T′ + e − e′ is a spanning tree.

(Note: I am using + for the operation of adding an edge. This is different from G + H, which would denote the disjoint union of two graphs.)

Proof. (1): T − e has exactly two components (shown in green and blue in the figure below). It suffices to choose e′ to be an edge of T′ with one endpoint in each component of T − e. Such an edge must exist because T′ is connected.

[Figure: the two components of T − e, and an edge e′ of T′ joining them.]

(2): T′ + e has a cycle (since trees are maximally acyclic graphs); call it C. It is shown in yellow below. Then C ⊄ T (because T is acyclic), so pick e′ ∈ C∖T. Then T′ + e − e′ is still connected and has n − 1 edges, hence is a spanning tree. □

[Figure: the cycle C in T′ + e, and the edge e′ ∈ C∖T to be removed.]

You'll actually prove a stronger fact on HW #2: given e ∈ E(T) − E(T′), there exists e′ ∈ E(T′) − E(T) such that T − e + e′ and T′ + e − e′ are both spanning trees of G.

Here is a precise statement of Kruskal's algorithm.

Input: connected graph G with weight function wt
Output: an MST T
T_0 := ∅
i := 0
A := E   # available edges
while (V, T_i) is disconnected and A ≠ ∅ do {
  Choose e ∈ A of minimum weight
  A := A − e
  if T_i + e is acyclic:
    Set T_{i+1} := T_i + e
    Set i := i + 1
}
Output T = T_i

Theorem 2.11. The output T of Kruskal's algorithm is an MST of G.

Proof. First, we check that the output is really a spanning tree. By construction, T is acyclic. If it is disconnected, then the algorithm made a mistake: pick any edge e of G joining two different components of T. Then T + e is acyclic, and that was still true at whatever stage in the algorithm e was considered (since T_i ⊆ T), so e should have been added, but wasn't.

Now comes the clever part. Let T* be some MST (certainly one must exist, since T(G) is finite). If T* = T, then we're done. Otherwise, let e be the first edge chosen for T that is not in T*. Let F := {f ∈ E(T) : f was added earlier than e}.
• By the choice of e, we have F ⊆ E(T*).
• By Proposition 2.10(2), we can choose e′ ∈ E(T*) − E(T) so that T** = T* − e′ + e is a spanning tree.
• Note that e′ could not have been available at the stage of the algorithm when e was added to T. Otherwise it would have been added to T (since F + e′ ⊆ E(T*) is acyclic), and it would then be in F (by the choice of e), hence in T*; but it isn't.
• Therefore, e′ was considered after e, so wt(e′) ≥ wt(e).
• In particular, wt(T**) ≤ wt(T*).
• But T* is an MST, so equality must hold, which means that in fact T** is a minimum spanning tree.

We have shown: if G has an MST T* with |T ∩ T*| = k < |T|, then it has an MST T** with |T ∩ T**| = k + 1. By induction (or iteration, if you prefer), T itself must be an MST. □

Some notes on Kruskal's Algorithm:

1. Computational issues. If you are going to implement Kruskal's Algorithm, it is best to first sort the edges in increasing order by weight. Probably the best way to check whether an edge can be added is to check that its endpoints are in different components. This means keeping track of which vertices lie in a common component, and updating that data whenever the algorithm successfully adds an edge; see the sketch below.
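Here is a compact implementation sketch (mine, not from the notes) that follows the advice in note 1: the edges are sorted once, and a union-find structure keeps track of components, so the "does this edge complete a cycle?" test is nearly constant time.

def kruskal(n, weighted_edges):
    """weighted_edges: list of (weight, u, v) on vertices 0..n-1.
    Returns the edge set of an MST, assuming the graph is connected."""
    parent = list(range(n))
    def find(x):                       # root of x's component, with path compression
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    tree = []
    for w, u, v in sorted(weighted_edges):    # cheapest available edge first
        ru, rv = find(u), find(v)
        if ru != rv:                   # endpoints in different components: no cycle
            parent[ru] = rv            # merge the two components
            tree.append((u, v))
    return tree

edges = [(4,0,1), (8,0,2), (1,1,2), (2,1,3), (7,2,3), (9,3,4), (5,2,4)]
print(kruskal(5, edges))               # [(1, 2), (1, 3), (0, 1), (2, 4)], weight 12

2. Matroids. Note that the exchange rules (Proposition 2.10) hold in a more general context. Let
E = set of edges ↔ a set of vectors spanning a vector space S,
T, T′ = spanning trees ↔ subsets of E that are bases for S.
Then Kruskal's Algorithm can be used to find a minimum-weight basis.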
More generally, let E be any finite set, w : E → R, and B a collection of subsets of E of the same size satisfying the exchange properties. In this case B is called a matroid basis system, and Kruskal's Algorithm can be used to find an element of B of minimum weight. In fact, matroids can be characterized as exactly the set systems for which Kruskal's algorithm works. Matroids are of high importance both in combinatorial optimization and in algebraic combinatorics.

3. Prim's Algorithm is another way to efficiently compute an MST. It works like this:

Pick an arbitrary vertex v
X := {v}
T := ∅
while X ≠ V do {
  Choose an edge e = xy of minimum weight such that x ∈ X and y ∉ X
  T := T + e
  X := X ∪ {y}
}
Output T

This method also produces an MST (proof omitted). It has the advantage of being somewhat easier to implement than Kruskal's algorithm, because it is easier to keep track of the single vertex set X than the component structure of a graph. On the other hand, Prim's algorithm is specifically about graphs and cannot be extended to matroids (the concept "cycle" has an analogue for matroids, but "vertex" doesn't).

3. Matchings and Covers

Throughout this section, G = (V, E) will be a connected simple graph. We will not generally distinguish between an edge set A ⊆ E(G) and the corresponding spanning subgraph (V(G), A); doing so is usually more trouble than it's worth. Sometimes we'll need to say whether we mean to include all vertices, or just the set of vertices V(A) incident to at least one edge of A. But the meaning of terms such as "acyclic" and "component" should be clear when they are applied to edge sets. Maybe this warning should be put in the first section of the notes next time.

3.1. Basic definitions.

Definition 3.1. A vertex cover of G is a set Q ⊆ V(G) that contains at least one endpoint of every edge. An edge cover of G is a set L ⊆ E(G) that contains at least one edge incident to every vertex.

Here are some pictures of vertex covers.

[Figure: vertex covers of several small graphs.]

Warning: "Minimal" and "minimum" don't mean the same thing. In the first cover shown above (G = C6, Q = {1, 2, 4, 5}), the cover is minimal because no proper subset is a cover, but it is not minimum because G has a cover of strictly smaller cardinality.

Of course, V itself is always a vertex cover, and E is always an edge cover (provided that G has no isolated vertices). The interesting problem is to try to find covers that are as small as possible.

Definition 3.2. A matching on G is an edge set M ⊆ E that includes at most one edge incident to each vertex. A vertex is matched (or saturated) by M if it is incident to an edge in M. The set of matched vertices is denoted V(M); note that |V(M)| = 2|M|. The matching M is maximal if it is not contained in any strictly larger matching; maximum if it has the largest possible size among all matchings on G; and perfect if V(M) = V(G). More generally, a k-factor in G is defined to be a k-regular spanning subgraph; thus a perfect matching is a 1-factor.

Warning: Again, "maximum" is a stronger condition than "maximal." For example, in the figure below, the blue matching M is maximum; the red matching M′ is maximal but not maximum.

[Figure: two matchings M and M′ on a graph with vertices s, t, u, v, w, x, y, z.]

The vertex analogue of a matching is a coclique (or independent set or stable set): a set of vertices such that no two are adjacent. In general we want to know how large a coclique or matching can be in a given graph.
So we have four related notions:
• coclique: a set of vertices touching each edge at most once
• vertex cover: a set of vertices touching each edge at least once
• matching: a set of edges touching each vertex at most once
• edge cover: a set of edges touching each vertex at least once

Define
α = maximum size of a coclique,   β = minimum size of a vertex cover,
α′ = maximum size of a matching,   β′ = minimum size of an edge cover.

(This is West's notation, which may or may not be standard. The mnemonic is that invariants without primes involve sets of vertices; the primed versions involve edges. The symbol α, the first letter of the Greek alphabet, is actually fairly standard for the size of the largest coclique in G. The last letter, ω, is the size of the largest clique.)

The matching and edge cover problems are equivalent and can be solved in polynomial time. The coclique and vertex cover problems are equivalent and are NP-complete in general. However, all four problems are equivalent for bipartite graphs. Fortunately, many matching problems are naturally bipartite (e.g., matching job applicants with positions, students with advisors, workers with tasks, or columns with rows). By the way, counting the maximum matchings of G is hard in general (although it can be done for, say, K_{2n} and K_{n,n}). It is unknown even for such nice graphs as Q_n.

Example 3.3. For the cycle C_n, it should be clear that α = α′ = ⌊n/2⌋ and β = β′ = ⌈n/2⌉. In particular α + β = α′ + β′ = n, and α = β = α′ = β′ holds if and only if the cycle is even. The first observation holds for all graphs, as we will see; the second one suggests that bipartiteness may be important.

Example 3.4. If G is bipartite, then each partite set is a vertex cover. On the other hand, a bipartite graph can have covers that are smaller than either partite set (example on the left below). It seems plausible to try to build a minimum vertex cover by using vertices of large degree, but this doesn't always work either. For example, in the graph on the right below, the unique vertex of largest degree is x, but the unique minimum vertex cover is {a, b, c}.

[Figure: two bipartite graphs; in the right-hand one, the vertex x of largest degree lies outside the minimum cover {a, b, c}.]

3.2. Equalities among matching and cover invariants.

Proposition 3.5. α + β = n.

Proof. A vertex set is a coclique iff its complement is a cover. In particular, the complement of a maximum coclique is a minimum cover. □

Proposition 3.6 (Gallai's identity). α′ + β′ = n.

Proof. First, we show that α′ + β′ ≤ n. Let M be a maximum matching, so that |M| = α′. Let A be a collection of edges obtained by choosing, for each M-unmatched vertex, one edge incident to it. These edges are all distinct (otherwise M would not even be maximal), so |A| = n − |V(M)| = n − 2α′. On the other hand, A ∪ M is an edge cover, so |A ∪ M| ≥ β′ and

(1) α′ + β′ ≤ |M| + |A ∪ M| = 2|M| + |A| = n.

Second, we show the opposite inequality. Let L be a minimum edge cover, so |L| = β′. For every edge e = xy ∈ L, we must have either deg_L(x) = 1 or deg_L(y) = 1, for otherwise e could be removed from L to yield a smaller edge cover. This implies that every component of L is a star; in particular L is acyclic, so |L| = n − c(L). Now construct a matching M by choosing one edge from every component of L. Then |M| = c(L) ≤ α′ and

(2) α′ + β′ ≥ |M| + |L| = c(L) + n − c(L) = n.

Combining (1) and (2) finishes the proof. □

Note that once the proof is complete, it follows that equality had to hold in both (1) and (2). That means that the proof gives us an easy way of constructing either a matching from an edge cover, or vice versa.

Proposition 3.7. α′ ≤ β.

Proof.
If M is a matching in G, then no vertex can cover more than one of the edges in M, so every vertex cover has to have at least |M| vertices. □

This kind of result is called a weak duality: every matching has size less than or equal to the size of every vertex cover. It implies that if we can find a matching and a vertex cover of the same size, then the matching must be maximum and the cover must be minimum; but it does not guarantee that such a pair exists. In fact equality does not always hold, e.g., for an odd cycle. However, there is good news:

Theorem 3.8 (König-Egerváry Theorem). If G is bipartite then α′ = β.

This is going to take some proof. We are actually going to construct an algorithm to enlarge a matching, one edge at a time. If the matching is maximum, then the algorithm will certify that by producing a vertex cover of the same size as the matching.

3.3. Augmenting paths.

Definition 3.9. Let M be a matching in G and P ⊆ G a path. The path P is M-alternating if its edges alternate between edges in M and edges not in M. It is M-augmenting if it is M-alternating and both endpoints are unmatched by M.

[Figure: a matching M on a graph with vertices s, t, u, v, w, x, y, z, and an M-augmenting path v, u, x, y.]

Note that every M-augmenting path has an even number of vertices (two unmatched endpoints and an even number of matched interior vertices), hence odd length. If P is an M-augmenting p,q-path, then M△P is a matching with one more edge than M, where △ means symmetric difference. (The operation of passing from M to M△P might be called "toggling P with respect to M": take the edges in P ∩ M out, and put the edges of P∖M in.)

One fact about the symmetric difference operation: if C = A△B, then A = B△C and B = A△C. (Each of these equations says exactly the same thing: every element is contained in an even number of the sets A, B, C.)

The following lemma will be useful both immediately, in proving Berge's theorem, and also later, in proving Tutte's 1-factor theorem for nonbipartite matching.

Lemma 3.10. Let M, N be matchings on G. Then every nontrivial component of M△N is either a path or an even cycle.

Proof. Each vertex of G can have degree at most 2 in M△N. Therefore, every nontrivial component H is either a path or a cycle. If H is a cycle, then its edges alternate between edges of M and edges of N, so H is even. □

Theorem 3.11 (Berge, 1957). Let M be a matching on G. Then M is maximum if and only if G contains no M-augmenting path.

Proof. (⟹) If M has an augmenting path P, then M△P is a bigger matching.

(⟸) Suppose that M is a non-maximum matching. Let N be a matching with |N| > |M|, and let F = M△N. Then |F ∩ N| > |F ∩ M|, because

|F ∩ N| = |N| − |M ∩ N| > |M| − |M ∩ N| = |F ∩ M|.

In particular, some component H of F contains more edges of N than of M. By Lemma 3.10, H must be a path of odd length ℓ, say H = v0, v1, …, vℓ with v0v1 ∈ N, v1v2 ∈ M, v2v3 ∈ N, …, v_{ℓ−2}v_{ℓ−1} ∈ M, v_{ℓ−1}vℓ ∈ N. Now, both F and N contain exactly one edge incident to v0, namely v0v1. But F = M△N, so M = F△N, and so M does not contain any edge incident to v0. By the same logic, vℓ ∉ V(M) as well. But this says precisely that H is an M-augmenting path. □

Berge's theorem reduces the problem of finding a maximum matching to the problem of determining whether a given matching has an augmenting path. This problem is easier when G is bipartite.

Notation: N(x) = {neighbors of x}, N(X) = ⋃_{x ∈ X} N(x).

Here is the algorithm. It is really a form of breadth-first search.
(1) Start at an unmatched vertex x0 ∈ X. Let S0 = {x0}.
(2) Add the edges x0y for all y ∈ N(x0) ⊆ Y.
(3) If any vertex added in step (2) is unmatched, then we have an augmenting path. Otherwise, add the edges that match every y ∈ N(x0) to its spouse (which must lie in X). Call this set of spouses S1.
(4) Add the vertices y and edges sy for all s ∈ S1 and y ∈ N(s).
(5) If any vertex added in step (4) is unmatched, then we have an augmenting path. Otherwise, add the edges that match every newly added y to its spouse (which must lie in X). Call this set of spouses S2.
(6) Iterate until either we find an augmenting path, or no new vertices are found in an even-numbered step.

If the algorithm does not find an augmenting path, repeat for every possible unmatched starting vertex x0 ∈ X∖V(M).
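Here is a Python sketch (mine, not from the notes) of the search just described, together with the outer loop that repeatedly toggles augmenting paths. The X- and Y-vertices are assumed to have distinct names, and the helper names are ad hoc.

def augmenting_path(G, match, x0):
    """G: dict mapping each x in X to its neighbor list in Y.
    match: dict storing the matching in both directions.
    Returns an augmenting path [x0, y, x, ..., y*] or None."""
    parent = {x0: None}                  # the BFS search tree
    frontier = [x0]                      # S_0 = {x0}
    while frontier:
        spouses = []                     # the next set S_i
        for x in frontier:
            for y in G[x]:
                if y in parent:
                    continue
                parent[y] = x
                if y not in match:       # unmatched y: walk back to x0
                    path = [y]
                    while path[-1] != x0:
                        path.append(parent[path[-1]])
                    return path[::-1]
                parent[match[y]] = y     # matched y: add its spouse to S_i
                spouses.append(match[y])
        frontier = spouses
    return None                          # the search petered out

def maximum_matching(G):
    match = {}
    for x0 in G:                         # try every unmatched starting vertex
        if x0 in match:
            continue
        P = augmenting_path(G, match, x0)
        if P:                            # toggle: M := M triangle P
            for x, y in zip(P[0::2], P[1::2]):
                match[x] = y; match[y] = x
    return {x: y for x, y in match.items() if x in G}

G = {'x1': ['y1'], 'x2': ['y1', 'y2'], 'x3': ['y2', 'y3']}
print(maximum_matching(G))   # {'x1': 'y1', 'x2': 'y2', 'x3': 'y3'}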
Example. Let us run the algorithm on the following graph and matching:

[Figure: a bipartite graph with partite sets X = {x1, …, x8} and Y = {y1, …, y8}, and a matching M of size 6.]

Here are the search trees we get by using x3 and x7, respectively, as the starting vertices:

[Figure: the two search trees, rooted at x3 and at x7.]

If we start at x3, the search peters out quickly, and no augmenting path is found. Starting at x7, however, finds an augmenting path P. Here is what P looks like in the original graph, together with the larger matching M′ = M△P:

[Figure: the augmenting path P in G, and the new matching M′ = M△P.]

At this point, the search will find no augmenting path: we start at x3, construct the two-edge search tree x3 − y1 − x1, and terminate. Note that the search has told us that N({x1, x3}) ⊆ {y1}. But that means that Q = {y1} ∪ (X∖{x1, x3}) is a vertex cover, and |Q| = 7 = |M′|, which verifies (by weak duality) that M′ is a maximum matching and Q is a minimum vertex cover.

Proposition 3.12. Let G be a bipartite graph and M a matching with no augmenting path. Let U = U_X ∪ U_Y be the set of vertices visited in the (unsuccessful) calls to the Augmenting Path Algorithm. Then Q = U_Y ∪ ((X ∩ V(M)) ∖ U_X) is a vertex cover of cardinality equal to |M|.

Proof. First, we show that Q is a vertex cover. Note that N(U_X) ⊆ U_Y by the construction of the APA. So if e = xy with x ∈ U_X, then y ∈ N(U_X) ⊆ U_Y ⊆ Q, while if x ∉ U_X, then x is matched (every unmatched vertex of X is the root of one of the searches, hence lies in U_X), so x ∈ Q.

What about its cardinality? Every vertex in Q is matched. On the other hand, the vertices in U_Y are matched to vertices in U_X, which means that no edge of M meets Q more than once. Therefore, |Q| ≤ |M|, and the reverse inequality holds by weak duality. □

Here is another example. The matching on the left is maximum, with the unsuccessful search trees shown on the right. We have U_X = {x1, x2, x3, x5, x6} and U_Y = {y2, y3, y4}, which says that Q = U_Y ∪ ((X ∩ V(M)) ∖ U_X) = {y2, y3, y4, x4, x7, x8} is a vertex cover, as shown.

[Figure: a maximum matching on a bipartite graph, the unsuccessful search trees, and the resulting minimum vertex cover Q.]

The punchline: Berge's Theorem together with Prop. 3.12 proves the König-Egerváry Theorem: β = α′ for all bipartite graphs.

3.4. Hall's Theorem and consequences.

Hall's Matching Theorem is a classical theorem about matchings with lots of proofs; the one I like uses the Augmenting Path Algorithm. (This is not Hall's original proof.)

Theorem 3.13 (Hall's Matching Theorem, 1935). Let G be an X,Y-bigraph. Then G has a matching saturating X if and only if |N(S)| ≥ |S| for every S ⊆ X.

Proof. Necessity of Hall's condition (⟹): Let M be a matching saturating X. Then every S ⊆ X is matched to a set of equal cardinality, which is a subset of N(S).
Sufficiency of Hall's condition (⟸): Let M be a maximum matching, and suppose that x ∈ X is unsaturated. Consider the unsuccessful search tree computed by the Augmenting Path Algorithm starting at x. Call its vertex set S ∪ T, with S ⊆ X and T ⊆ Y. Observe that:
• |S| > |T|, since every vertex in T is matched to a vertex in S, and S contains at least one unmatched vertex, namely x.
• N(S) = T, since that is precisely how the algorithm works.
Therefore, S violates Hall's condition. □

Hall's Theorem is not useful as an algorithm, because actually computing |N(S)| for every S ⊆ X would require looking at all 2^{|X|} subsets. On the other hand, it is a great theoretical tool. Here are some consequences:

Corollary 3.14. Every regular bipartite simple graph has a perfect matching.

Proof. Let G be a k-regular X,Y-bigraph. By bipartite handshaking, e(G) = k|X| = k|Y|, so in particular |X| = |Y|. Let S ⊆ X and consider the induced subgraph H = G|_{S ∪ N(S)}, which is bipartite with partite sets S and N(S). Each vertex of S has degree exactly k in H, and each vertex of N(S) has degree at most k in H. By the bipartite handshaking formula, |S| ≤ |N(S)|. Since S was arbitrary, Hall's Theorem implies that G has a perfect matching. □

Corollary 3.15. Every k-regular bipartite simple graph decomposes into the union of k perfect matchings. (Here "decomposes" refers to the edge set.)

This corollary can be rephrased in terms of matrices. A simple bipartite graph can be recorded by its bipartite adjacency matrix, with a row for each vertex in X and a column for each vertex in Y, with edges indicated by 1's and non-edges by 0's. The graph is k-regular iff every column and row sum is k (which requires the numbers of columns and rows to be the same). A matching corresponds to a transversal: a collection of 1's including exactly one entry in every row and column (this is essentially the same thing as a permutation matrix). The corollary then says that every n × n 0,1-matrix with all row and column sums equal to k can be written as the sum of k permutation matrices.

3.5. Weighted bipartite matching and the Hungarian Algorithm.

Let G be a bipartite graph with partite sets X, Y, and let w : E(G) → R≥0 be a weight function.

Problem: Find a matching M of maximum total weight, i.e., maximizing w(M) = ∑_{e ∈ M} w(e).

WLOG, we may assume that |X| = |Y| = n (adding isolated vertices if necessary), and that G ≅ K_{n,n} (adding edges of weight 0 if necessary). Then the maximum-cardinality matchings are the n! perfect matchings, and we may as well look for one of them. Represent the pair (G, w) by an n × n matrix W = (w_ij)_{i,j ∈ [n]}, where w_ij = w(x_i y_j).

Definition 3.16. A transversal of W is a set of n matrix entries, one in each row and column. (Equivalent to a perfect matching on K_{n,n}; can be described by a permutation σ : [n] → [n].)

Definition 3.17. A weighted cover C of W is a list of row labels a1, …, an and column labels b1, …, bn such that

(3) a_i + b_j ≥ w(x_i y_j) for all (i, j) ∈ [n] × [n].

The cost of the cover is |C| = ∑_{i=1}^n a_i + ∑_{j=1}^n b_j.

[Figure: a weight matrix W, a transversal of weight 14, and a weighted cover of cost 19.]

Problem: Given a square nonnegative integer matrix, find a cover of minimum cost.
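For matrices small enough to enumerate all n! permutations, a brute-force search makes a handy sanity check on these two problems and on the weak duality proved in the lemma below. The sketch and the 3 × 3 weight matrix here are mine, not from the notes; note that the row-maxima cover happens to have the same cost as the best transversal, which certifies that both are optimal.

from itertools import permutations

def max_transversal(W):
    """Maximum-weight transversal by brute force over all n! permutations."""
    n = len(W)
    return max((sum(W[i][s[i]] for i in range(n)), s)
               for s in permutations(range(n)))

def cover_cost(W, a, b):
    """Cost of a weighted cover, after checking a_i + b_j >= w_ij everywhere."""
    n = len(W)
    assert all(a[i] + b[j] >= W[i][j] for i in range(n) for j in range(n))
    return sum(a) + sum(b)

W = [[0, 1, 5],
     [4, 4, 2],
     [6, 3, 4]]                                   # made-up weights
print(max_transversal(W))                         # (15, (2, 1, 0))
print(cover_cost(W, a=[5, 4, 6], b=[0, 0, 0]))    # 15: equality, so both optimal

Lemma 3.18. The maximum weighted matching and minimum weighted cover problems are weakly dual. That is, for every matching M and cover C, w(M) ≤ |C|.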
Moreover, equality holds if and only if w(x_i y_j) = a_i + b_j for every edge x_i y_j ∈ M. In that case, M and C are both optimal.

Proof. Represent M by a transversal σ of the weight matrix W. The cover condition (3) says that w_{i,σ(i)} ≤ a_i + b_{σ(i)} for all i, so

(4) w(M) = ∑_{i=1}^n w_{i,σ(i)} ≤ ∑_{i=1}^n (a_i + b_{σ(i)}) = |C|,

and equality holds in (4) if and only if it holds for each i, since the inequality is term-by-term. □

The cover shown above has cost 19. Can this be improved? If we could find a column or a row in which every entry was overcovered (i.e., for which the inequality (3) was strict), then we could decrease the label of that column or row. But there is not always such a column or row. The good news is that we can do something even more general, in the spirit of the Augmenting Path Algorithm. The key is to increase the cover on some columns by some amount ε and decrease it on some rows by the same amount ε, making sure that we decrease more labels than we increase.

To find this, circle the matrix entries that are covered exactly, i.e., those w_ij such that w_ij = a_i + b_j. The corresponding edges form a spanning subgraph H ⊆ K_{n,n} called the equality subgraph H = Eq(W, C).

[Figure: a 4 × 4 weight matrix with cover of cost |C| = 19, its exactly-covered entries circled, and the corresponding equality subgraph on rows x1, …, x4 and columns y1, …, y4.]

Now run the Augmenting Path Algorithm to find a maximum matching M on H, together with a minimum vertex cover Q. Recall that |M| = |Q| by the König-Egerváry Theorem (which we proved using the APA), and that Q = (X∖U_X) ∪ U_Y, where U is the set of vertices reached during the last (unsuccessful) search for an augmenting path. (Note that there is no guarantee of uniqueness for M and Q, because H may have several different maximum matchings and minimum vertex covers, but the APA will certainly produce one of each.) A possible output is shown below.

[Figure: the equality subgraph with a maximum matching M, the search set U, and the vertex cover Q.]

If |M| = |Q| = n, then w(M) = |C| and we are done by weak duality (Lemma 3.18). Otherwise, we can use Q to find a less expensive cover, as follows. In terms of the matrix, Q = Q_X ∪ Q_Y corresponds to a collection of rows and columns (which we'll also call Q_X and Q_Y), of total cardinality < n, containing every circled matrix entry. Construct the excess matrix, whose (i,j) entry is a_i + b_j − w_ij. (So the zeros in this matrix correspond precisely to edges of H.) Then paint blue the rows and columns corresponding to the vertices in Q. The entries of the excess matrix not painted blue must all be positive; their minimum is the tolerance, denoted ε. Here ε = 2. Now decrease the labels on all rows in X∖Q_X by ε, and increase the labels on all columns in Q_Y by ε.

[Figure: the equality graph and its vertex cover; the excess matrix, with tolerance ε = 2; and the improved cover, with red arrows indicating which labels have been increased or decreased.]

This operation maintains the cover conditions, since the only matrix entries whose covers decrease are those with rows in X∖Q_X and columns in Y∖Q_Y, but all those entries were already over-covered by at least ε. Moreover, the cost of the cover has dropped by ε(n − |Q|).

Repeat this procedure until the equality subgraph has a perfect matching. In this case, it just takes one more step.
[Figure: the second iteration: the improved cover, its equality graph and vertex cover, the excess matrix with tolerance ε = 1, and the final cover.]

Now we're done: we have a perfect matching whose weight equals the cost of the cover.² This procedure is called the Hungarian Algorithm. Here is a summary of the algorithm.

²Thanks to Lawrence Chen for catching a mistake in an earlier example of the procedure.

The Hungarian Algorithm
Input: weight function w : E(K_{n,n}) → R
Output: a maximum-weight matching M and a minimum cover C = (a1, …, an, b1, …, bn)
(0) Initialize a_i := max{w_{i1}, …, w_{in}} and b_j := 0 for all i, j ∈ [n]
(1) H := {x_i y_j | a_i + b_j = w_ij}
(2) Use the APA to compute a maximum matching M and a minimum cover Q of H
(3) while |Q| < n do
(4)   Let ε := min{a_i + b_j − w_ij : x_i ∉ Q_X, y_j ∉ Q_Y}
(5)   Set a_i := a_i − ε for all x_i ∈ X∖Q_X
(6)   Set b_j := b_j + ε for all y_j ∈ Q_Y
(7)   Recompute H
(8)   Use the APA (starting with M) to recompute the pair M, Q
(9) Return (M, C)

One non-obvious fact is that after the cover is adjusted in steps (5) and (6), the new equality subgraph computed in step (7) will still have M as a matching. This is left as an exercise.

Observe that if all the weights were nonnegative integers to begin with, then the procedure will definitely terminate (in at most a number of steps equal to the original cost of the cover). We have proved:

Theorem 3.19. For any bipartite graph G and weight function w : E(G) → N, the Hungarian Algorithm calculates a minimum cover C = (a, b) and a maximum matching M, with |C| = w(M).

We don't need to assume nonnegative weights, since adding the same constant to all n² edges does not change which matchings have maximum weight. Also, N can be replaced with Q: there are finitely many weights, so just multiply all the weights by a common denominator to convert them to integers. Again, this will not change which matchings have maximum weight.

What about real weights? The potential problem is that the sequence of cover values produced by the Hungarian Algorithm might be something like 2, 1.1, 1.01, 1.001, 1.0001, …, and the algorithm might never terminate, even though the minimum cover value is actually 1. Fortunately, this doesn't happen, for purely combinatorial reasons, and moreover the number of steps is at worst quadratic in n. The proof is left as an exercise; carefully examining the following example should show you why.

Example 3.20. Consider the weighted copy of K_{5,5} with the weight matrix and initial cover shown below. (The cover has been cooked up carefully to demonstrate how the algorithm works.) The following figures show the progress of the algorithm.

[Figure: a 5 × 5 weight matrix with its initial row and column cover labels.]
Each iteration shows, left to right:
(1) The current cover, and the edges for which equality holds (circled in blue)
(2) The equality subgraph (shown as a graph), together with the output of the APA
(3) The excess matrix and the resulting tolerance value
(4) The improved cover

[Figure: four iterations of the Hungarian Algorithm. Iteration #1: cover cost 175, tolerance ε = 5. Iteration #2: cover cost 165, tolerance ε = 3. Iteration #3: cover cost 159, tolerance ε = 9. Iteration #4: cover cost 141, tolerance ε = 1; an augmenting path is found, producing a new matching and the final solution.]

3.6. Stable matchings.

(Note: This material appears in the 3rd and subsequent editions of Diestel (2005); it is not in the 2nd edition (2000).)

You are the chair of the mathematics department, and you have to assign n students X = {x1, …, xn} to n advisors Y = {y1, …, yn}. That is, you need to choose a perfect matching in K_{n,n}. (Let's suppose n = 3, so that the problem is a manageable size.) You'd like to do this in some way that reflects the preferences of each student and advisor, and you've asked each person to submit a form listing his or her three preferences in descending order. The data you have is as follows:

Student   Preference order
x1        y1, y2, y3
x2        y1, y3, y2
x3        y1, y3, y2

Advisor   Preference order
y1        x1, x2, x3
y2        x1, x3, x2
y3        x3, x2, x1

Unfortunately, it's not so clear what "optimal" means. Should we be trying to maximize overall happiness? Are the students' preferences more important than the advisors', or vice versa? Since we know how to solve weighted bipartite matching, we could try the following approach. Each student assigns two points to his top-choice advisor and one point to his second choice. Each advisor assigns two points to her top-choice student and one point to her second choice.
We then get a weighted copy of K_{3,3}, with weight matrix as follows:

        y1          y2          y3
x1   2 + 2 = 4   1 + 2 = 3   0 + 0 = 0
x2   2 + 1 = 3   0 + 0 = 0   1 + 1 = 2
x3   2 + 0 = 2   0 + 1 = 1   1 + 2 = 3

We now select the matching of maximum total weight. Since n is so small, we can just do this by brute force instead of applying the Hungarian Algorithm. It's not hard to verify that the unique maximum-weight matching is M = {x1y2, x2y1, x3y3}.

There is a problem with this solution, though. x1 and y1 are each other's first choice, yet they are not paired in M. So what's to prevent them from deciding to work with each other, leaving x2 and y2 high and dry? Those two could pair up, it's true, but neither of them would be at all happy about it: each is the other's last choice. They'd certainly have cause to complain about the system. Meanwhile, if you use your power as chair to prevent x1 and y1 from working together, neither of them is going to thank you for it. Maybe looking for a maximum-weight matching is not the way to go.

What you'd really like is a system that will produce a matching in which no advisor-swapping will take place: that is, a stable matching. Not everybody will necessarily be happy, but at least no one (we hope) will have reason to complain about the system's fairness, and no one will have an incentive to ignore the rules of the game.

Definition 3.21. Let M be a perfect matching on K_{n,n}, and write M(z) for the vertex matched to z. An unstable pair in a matching M is a pair (x, y) such that x prefers y over M(x) and y prefers x over M(y). A matching is stable if it has no unstable pair.

It is by no means obvious that a stable matching always exists for any given list of preferences, but it does. It can be found by the following algorithm, due to Gale and Shapley. We designate one partite set, X, as the proposers and the other, Y, as the responders. For clarity in pronouns, I will refer to the elements of X as male and the elements of Y as female, but this could just as easily be switched.

The Gale-Shapley Stable Matching Algorithm — Deterministic Version
• Each proposer proposes to the top-choice responder who has not already rejected him.
• If the set of proposals forms a perfect matching, that matching is the output.
• Otherwise, each responder rejects all but her top choice of the proposals she receives.
• Repeat until the proposals form a perfect matching.

Theorem 3.22. The Gale-Shapley algorithm terminates and produces a stable matching.

Proof. First, we show that the algorithm does not get stuck; in other words, we never reach a situation in which some proposer has been rejected by every responder. Here is why this can't happen.
(1) If y ever issues a rejection, then she will receive at least one proposal at every subsequent stage of the algorithm. (Whoever survived y's previous cut will keep proposing to y until replaced by someone else.)
(2) Suppose that x has already been rejected by n − 1 responders, so that he proposes to the one remaining responder y. In that iteration, every responder other than y must receive at least one proposal (since they have each previously issued at least one rejection). But then by the pigeonhole principle, every responder receives exactly one proposal, and so the algorithm terminates with a perfect matching M.
By the way, each iteration that does not terminate removes at least one edge from the graph of potential matched pairs. Therefore, the algorithm terminates in at most n² − n iterations.

Now, suppose that (x, y) is an unstable pair.
That is,
• x prefers y to M(x),
• y prefers x to M(y),
• x and y aren't matched.
Since y prefers x to M(y), it follows that x never proposed to y. But that must mean that x prefers M(x) to y, which contradicts the definition of unstable pair. We conclude that M must be a stable matching. □

The algorithm would run equally well if the roles of advisors and students were switched. It turns out that the side doing the proposing is uniformly better off, and the side being proposed to is uniformly worse off. That is, if M is the matching produced by the Gale-Shapley algorithm and N is any other stable matching, then every x ∈ X prefers M(x) to N(x) (unless M(x) = N(x)), and every y prefers N(y) to M(y) (unless M(y) = N(y)). The moral of the story is that it is better to propose than to be proposed to.

When I talked about this in class on 2/16/16, the Math 725 students asked several questions, which I list here with my attempts to answer them.

(1) Does the algorithm still work if the partite sets do not have the same size? (Brandon Caudell) Yes, I think so. If there are more proposers than responders, the algorithm terminates when every responder receives at least one proposal (and rejects all but her top choice). If there are more responders, the algorithm terminates when every responder receives at most one proposal. The first part of the proof can be modified to show that the termination state is indeed reached, and the second part (i.e., that the output matching is stable) makes no reference to cardinality and thus still goes through.

(2) What if some proposer or responder has ties among his or her preferences? (Joe Cummings) This is a subtle problem; see the relevant Wikipedia article. Now you might have an unmatched pair x, y in which x prefers y over M(x), but y is indifferent between x and M(y); we might call such a pair weakly unstable. In this case, we'd have to do some more work to produce a matching without any pair that is even weakly unstable, and it is not clear whether such a matching always exists.

(3) Can there be a preference list where no proposer gets his first choice under Gale-Shapley? Can there be a preference list where no proposer or responder gets his or her first choice under Gale-Shapley? Yes to both. Joseph Doolittle came up with an example in class for n = 4.

(4) Does Gale-Shapley produce a universally optimal result for every proposer (i.e., if M is the Gale-Shapley output and M′ is any stable matching, then each proposer is at least as happy under M as under M′)? Likewise, does Gale-Shapley produce a universally pessimal³ result for every responder? Yes. See below.

(5) Can there be a preference list where no proposer gets his first choice under any stable matching? If you believe that the answers to (3) and (4) are yes, then the answer to this one is yes as well.

(6) Can there be a preference list where no proposer or responder gets his/her first choice under any stable matching? Probably, but I haven't constructed one.

(7) Can there be a preference list where the algorithm takes the maximum number of iterations to finish, i.e., n² − n? I am pretty sure the answer to this one is yes.

(8) What about K_n instead of K_{n,n}? In other words, suppose that each vertex in K_n has a preference order on the n − 1 other vertices, and we still want to find a stable matching (let's assume n is even). This is called the stable roommates problem, and it does not always have a solution, but it is possible to check in polynomial time whether a solution exists and, if so, to find it.
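Here is a Python sketch (mine, not from the notes) of Gale-Shapley. Rather than proceeding in rounds, it processes one free proposer at a time, which is legitimate by the remark that follows; on the advisor example above it outputs the stable matching x1–y1, x2–y2, x3–y3, in which x1 and y1 do end up together, unlike in the maximum-weight matching.

def gale_shapley(proposer_prefs, responder_prefs):
    # rank[y][x] = position of x on y's list, for quick comparisons
    rank = {y: {x: i for i, x in enumerate(prefs)}
            for y, prefs in responder_prefs.items()}
    next_choice = {x: 0 for x in proposer_prefs}   # next index on x's list
    engaged = {}                                   # responder -> proposer
    free = list(proposer_prefs)
    while free:
        x = free.pop()
        y = proposer_prefs[x][next_choice[x]]      # best y not yet refused
        next_choice[x] += 1
        if y not in engaged:
            engaged[y] = x                         # y's wait list was empty
        elif rank[y][x] < rank[y][engaged[y]]:     # y prefers x: swap
            free.append(engaged[y])
            engaged[y] = x
        else:                                      # y rejects x
            free.append(x)
    return {x: y for y, x in engaged.items()}

students = {'x1': ['y1','y2','y3'], 'x2': ['y1','y3','y2'], 'x3': ['y1','y3','y2']}
advisors = {'y1': ['x1','x2','x3'], 'y2': ['x1','x3','x2'], 'y3': ['x3','x2','x1']}
print(gale_shapley(students, advisors))
# {'x1': 'y1', 'x3': 'y3', 'x2': 'y2'}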
The "rounds" in the Gale-Shapley algorithm are in fact unnecessary. Consider the following "non-deterministic" version of the algorithm:

The Gale-Shapley Stable Matching Algorithm — Nondeterministic Version
• Let x be some proposer who is not currently on a wait list.
• x proposes to the top-choice responder y who has not already rejected him.
• If y's wait list was empty, then she puts x on it. Otherwise, y chooses between x and the proposer currently on her wait list, and rejects the one she likes less.
• The algorithm terminates when all wait lists are full and we have a perfect matching.

It turns out that no matter how x is chosen in each iteration, the algorithm produces the same matching. The proof is an exercise, and is a key step in proving that Gale-Shapley is universally proposer-optimal and responder-pessimal.

A little more generally, one can also prove that no stable matching is universally better than any other, in the following sense:

Proposition 3.23. Let M, M′ be stable matchings for the same preference list. If at least one proposer x is happier in M than in M′, then at least one responder y is happier in M′ than in M.

The proof is left as an exercise.

3.7. Nonbipartite matching.

Matching is harder without the assumption of bipartiteness. The König–Egerváry Theorem (α′ = β) need not hold: for instance, the odd cycle C_{2n+1} has matching number α′ = n and vertex cover number β = n + 1. (In fact the gap between the matching and cover numbers can be arbitrarily large.) In particular, the Augmenting Path and Hungarian Algorithms don't work, although there do exist polynomial-time algorithms to compute maximum-cardinality and maximum-weight matchings. One maximum-cardinality matching algorithm is Edmonds' Blossom Algorithm; this might make a good end-of-semester project. We are not going to look at algorithms, but we will prove two fundamental results in this area, namely Tutte's 1-Factor Theorem and the Berge-Tutte Formula. (Recall that "1-factor" is a synonym for "perfect matching," and more generally that "k-factor" means (the edge set of) a k-regular spanning subgraph.)

³"Pessimal" is the antonym of "optimal."

For a simple graph G, not necessarily connected, define o(G) = the number of odd(-order) components of G.

Lemma 3.24. Let H be a spanning subgraph of G. Then o(H) = o(G) + 2k for some k ≥ 0; in particular, o(H) ≡ o(G) ≡ n (mod 2).

Proof. Construct G from H by adding edges, one at a time (and note that H itself can be built up the same way from n isolated vertices, hence from n odd components). Each addition either doesn't change the component structure; makes an even component out of two even components; makes an odd component out of an even and an odd component; or makes an even component out of two odd components. The first three cases don't change the number of odd components; the last case decreases it by 2. □

Theorem 3.25 (Tutte's 1-Factor Theorem). A simple graph G = (V, E) has a perfect matching if and only if it satisfies Tutte's condition:

(5) o(G − S) ≤ |S| for all S ⊆ V.

Proof. (⟹) Let M be a perfect matching of G, let S ⊆ V, and let S̄ = V∖S. Consider the graph H with vertex set S̄ and edge set M|_{S̄}. This is a spanning subgraph of G − S, so

o(G − S) ≤ o(H) = #(isolated vertices of H) = #(vertices of S̄ matched to a vertex of S) = #(vertices of S matched to a vertex of S̄) (using M as a bijection) ≤ |S|.

(⟸) First, observe that, by the argument of Lemma 3.24, adding one or more edges to G can only decrease the left-hand side of (5), hence preserves Tutte's condition.
Accordingly, if the ⟸ direction is false, we can choose a maximal graph for which it fails, i.e., a graph G such that
• G satisfies Tutte's condition;
• G has no perfect matching; and
• adding any single missing edge to G produces a graph with a perfect matching.
We will show that these conditions imply a contradiction.

First, since G satisfies Tutte's condition, in particular o(G) = o(G − ∅) ≤ |∅| = 0, so n(G) is even by Lemma 3.24.

Define U = {v ∈ V | vw ∈ E for all w ∈ V∖{v}}. We will examine the graph G − U.

Case 1: G − U is a disjoint union of cliques. We can construct a perfect matching on G as follows. Start with a maximal matching on G − U; the number of leftover vertices is o(G − U). But by Tutte's condition, there are at least that many vertices in U, so all of those leftovers can be matched to vertices of U (in lots of ways). We have now matched all vertices outside U, and G[U] is a clique, so the remaining vertices of U can be matched to each other (in lots of ways).

Case 2: G − U has some component H that is not a clique. Then H must contain an induced P3; i.e., there are x, y, z ∈ V(H) with xy, yz ∈ E(H) and xz ∉ E(H). Also, since y ∉ U, there is a vertex w ∈ V(G − U) such that wy ∉ E(G). Note that w may or may not belong to H. Here's the picture. The vertices of U are colored gray, and all edges with one or both endpoints in U are omitted (otherwise the figure would be unreadable; remember, every gray vertex is adjacent to every other vertex).

[Figure: the component H containing the induced path x, y, z, and the vertex w with wy ∉ E(G).]

By the choice of G, adding any single edge to G produces a graph with a perfect matching (which must contain that added edge). Accordingly, let M1 and M2 be perfect matchings of G + xz and G + wy respectively.

[Figure: the matchings M1 and M2; in the example, the component C of M1△M2 containing xz is the cycle w, y, a1, …, a7, z, x, b1, b2, b3.]

The dashed edges wy and xz do not belong to G; all other edges do. Let F = M1△M2; then xz, wy ∈ F. Let C be the component of F containing xz (highlighted above). By Lemma 3.10, C is either a path or an even cycle. In fact, it's a cycle, because M1 and M2 are both perfect matchings.⁴

Case 2a: wy ∉ C (not the case in the example). Then M1△C = (M2 ∩ E(C)) ∪ (M1 − E(C)) is a perfect matching that contains neither xz nor wy, so it is a perfect matching of G. That's a contradiction; we had assumed that G contained no perfect matching.

Case 2b: wy ∈ C. Label the vertices in cyclic order as w, y, a1, …, ap, z, x, b1, …, bq. (It is possible that x and z are switched, but that case is equivalent, because we have made no distinction between these two vertices; they can be interchanged.) Note also that the numbers p and q are both odd (in the example, p = 7 and q = 3). This is because the path y, a1, …, ap, z has the same number of edges in M1 as in M2, hence has an even number of edges and an odd number of vertices. Meanwhile, |V(C)| = 4 + p + q is even, so p and q have the same parity.

Now, the edge set

M* = {a1a2, …, a_{p−2}a_{p−1}, a_p z, yx, b1b2, …, b_{q−2}b_{q−1}, b_q w} ⊆ E(G)

(shown in green below) is a perfect matching on V(C). Since M1 − E(C) (shown in yellow) is a perfect matching on V(G) − V(C), it follows that (M1 − E(C)) ∪ M* is a perfect matching of G, as desired. □

⁴In more detail: if C were a path, then both of its endpoints would have to lie in V(M1)△V(M2); for any two matchings M1, M2, the endpoints of the path components of M1△M2 lie in V(M1)△V(M2) (this isn't hard to see). In this case both matchings are perfect, so V(M1)△V(M2) = ∅.

[Figure: the perfect matching M* on V(C) (green), together with M1 − E(C) (yellow).]

Corollary 3.26 (Berge–Tutte Formula). Let G = (V, E), n = n(G), and for S ⊆ V define u(S) = o(G − S) − |S|.
Corollary 3.26 (Berge–Tutte Formula). Let $G = (V, E)$, let $n = n(G)$, and for $S \subseteq V$ define

$u(S) = o(G - S) - |S|$.

Let $m = \max\{u(S) \mid S \subseteq V(G)\}$. Then

(6) $\alpha'(G) = \dfrac{n - m}{2}$.

The number $u(S)$ measures the extent to which $S$ forms an obstruction to the existence of a perfect matching. Note that $u(\emptyset) = o(G) \ge 0$, so $m \ge 0$ and the formula does say that $\alpha' \le n/2$. Also, Tutte's condition says precisely that $\alpha' = n/2$ iff $u(S) \le 0$ for all $S$, so formula (6) generalizes Tutte's theorem.

Proof. Step 1: Prove that $\alpha' \le (n - m)/2$. This is the easier step. Let $M$ be a matching and $S \subseteq V(G)$. For each odd component $H$ of $G - S$, either $H$ has a vertex whose spouse is in $S$, or else $H$ has an $M$-unmatched vertex. Therefore, there are at least $u(S)$ vertices which are not matched by $M$, and at most $n - u(S)$ matched vertices. So $|V(M)| \le n - u(S)$. This is true for all $S$, so $|V(M)| \le n - m$, and dividing by 2 gives the desired inequality.

Step 2: Prove that $\alpha' \ge (n - m)/2$. First note that $m \ge u(\emptyset) = o(G) \ge 0$ and $m \equiv n \pmod 2$ by Lemma 3.24. Let $\tilde G = G \vee K_m$, where $\vee$ denotes the join. That is, $\tilde G$ is obtained from the disjoint union $G + K_m$ by adding an edge between every vertex of $G$ and every vertex of $K_m$. (If you like, $\tilde G = \overline{\overline{G} + \overline{K_m}}$.) We claim that $\tilde G$ satisfies Tutte's condition (5). (Details left to reader; it's routine using the definition of join.) By Tutte's 1-Factor Theorem, $\tilde G$ has a perfect matching $M$.
• At most $m$ edges of $M$ have endpoints in $K_m$.
• Deleting all such edges yields a matching of $G$ that matches at least $n(\tilde G) - 2m = n - m$ vertices.
• So $\alpha' \ge (n - m)/2$. □

Like Hall's Theorem 3.13 (which it implies), the Berge–Tutte Formula does not yield an efficient algorithm for calculating $\alpha'$ — you would have to calculate $u(S)$ for every $S \subseteq V(G)$ — but it is a useful theoretical tool. Here is a famous corollary.

Theorem 3.27 (Petersen's theorem). Let $G$ be a 3-regular simple graph with no bridge. Then $G$ has a perfect matching.

Proof. Fix $S \subseteq V(G)$ and let $k = o(G - S)$. If $k = 0$, then there is nothing to prove. Otherwise, let $H$ be some odd component of $G - S$. Then $a = \sum_{x \in H} d_G(x) = 3|H|$ is odd, but on the other hand $b = \sum_{x \in H} d_H(x)$ is even. So $a - b$, which is the number of edges from $H$ to $S$, is a positive odd number; but it cannot be 1, because then the edge it counts would be a bridge, so it is at least 3. This is true for every odd component $H$, so the number of edges from $G - S$ to $S$ is at least $3\,o(G - S)$; but on the other hand it is also at most $3|S|$, again because $G$ is 3-regular. It follows that $|S| \ge o(G - S)$, and we have proved that $G$ satisfies Tutte's condition. □

Hall's Marriage Theorem can be proved from Tutte's 1-Factor Theorem; this is left as an exercise.

4. Connectivity, Cuts, and Flows

4.1. Vertex connectivity. Let $G = (V, E)$ be a simple connected graph, and let $n = n(G) \ge 2$.

Definition 4.1. A separator (vertex cut, separating set) of $G = (V, E)$ is a vertex set $S \subsetneq V$ such that $G - S$ is disconnected or has only one vertex. Two vertices $x, y$ are separated by $S$ if they are in different components of $G - S$ (equivalently, every $x,y$-path has an internal vertex in $S$).

In the following figure, the vertex sets circled in red are separators.

[Figure: several graphs with separators circled in red.]

For example, $N(v)$ is a separator for any $v \in V$, and a cut-vertex is just a separator of cardinality 1. If $G \cong C_n$, then any two nonadjacent vertices of $G$ form a separator.

Definition 4.2. Let $x \ne y \in V(G)$. The local (vertex) connectivity of the pair $x, y$ is

$\kappa(x, y) = \kappa_G(x, y) = \min\{|S| : S \text{ is an } x,y\text{-separator}\}$

and the (vertex) connectivity of $G$ is

$\kappa = \kappa(G) = \min_{x,y} \kappa_G(x, y)$.

The graph $G$ is $k$-connected iff $\kappa(G) \ge k$.
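Definition 4.2 translates directly into a brute-force computation: test vertex subsets in increasing order of size until one separates $x$ from $y$. A minimal sketch, assuming the graph is a dict mapping each vertex to its set of neighbors (the helper name reaches is mine):

    from itertools import combinations

    def reaches(adj, banned, x, y):
        """Is there an x,y-path avoiding the vertex set `banned`?"""
        stack, seen = [x], {x}
        while stack:
            v = stack.pop()
            if v == y:
                return True
            for w in adj[v]:
                if w not in seen and w not in banned:
                    seen.add(w)
                    stack.append(w)
        return False

    def local_connectivity(adj, x, y):
        """kappa(x, y) per Definition 4.2; assumes x and y are
        nonadjacent, so that an x,y-separator exists."""
        others = [v for v in adj if v not in (x, y)]
        for k in range(len(others) + 1):
            for S in combinations(others, k):
                if not reaches(adj, set(S), x, y):
                    return k

Minimizing this quantity over pairs of vertices recovers $\kappa(G)$.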
Some easy observations:
• $\kappa(G) = 0 \iff G$ is not connected, or $G = K_1$.
• $\kappa(G) = 1 \iff G$ is connected but has a cut-vertex.
• $\kappa(G) = n(G) - 1 \iff G \cong K_n$.
• If $H$ is a spanning subgraph of $G$, then every separator of $G$ is also a separator of $H$, so $\kappa(H) \le \kappa(G)$.
• Loops and parallel edges do not affect connectivity.
• $\kappa(G) \le \delta(G)$, because for every $v \in V(G)$, the set $N(v)$ is a separator. Note that $v$ is an isolated vertex in $G - N(v)$. This even works if $G = K_n$.

As Diestel points out, it seems a little unnatural to define connectivity in this way. Since "connected" means that there is a path joining any two vertices, it seems as though "$k$-connected" ought to mean that there are $k$ different paths between any two vertices. In fact "different" is not the right notion here: "disjoint" is.

Definition 4.3. Let $x, y \in V(G)$. A disjoint path family for $x, y$ (for short, "$x,y$-DPF") is a family of distinct $x,y$-paths $\mathcal P = \{P_1, \ldots, P_k\}$ such that no two of the $P_i$ have a common vertex other than $x$ and $y$. [Footnote: This terminology is not standard. West calls such a family "internally disjoint." Diestel just says "disjoint." Other sources use terms such as "independent."] We will temporarily denote the maximum size of an $x,y$-DPF by $\lambda(x, y)$.

The left-hand graph in Figure 1(a) shows an $x,y$-DPF, and the three red vertices form an $x,y$-separator. In the right-hand graph (b), the path family shown is not a DPF, because some vertices belong to more than one path in the family. (It is true that no vertex other than $x, y$ belongs to all of the paths, but that does not matter — in order to qualify as a DPF, no other vertex can belong to even two of the paths.)

[Figure 1. (a) A disjoint path family (DPF). (b) A path family that is not a DPF.]

In fact, the graph in Figure 1(b) does have an $x,y$-DPF of size 3, shown in Figure 2(a). The red circled vertices form an $x,y$-separator; note that there is a bijection between vertices in the separator and paths in the DPF. If the dashed edge were removed, then the largest $x,y$-DPF would have size 2 and there would be a separator of size 2, as in Figure 2(b).

[Figure 2. Maximum DPFs and minimum separators.]

In general, every $x,y$-separator must have cardinality at least $\lambda(x, y)$, since every $x,y$-path must use a vertex of the separator, and no vertex can serve two paths of a DPF. In other words, for each pair $x, y$ there is a weak duality relation

(7) $\max\{|\mathcal P| : \mathcal P \text{ is an } x,y\text{-DPF}\} \le \min\{|S| : S \text{ is an } x,y\text{-separator}\}$.

Theorem 4.4 (Menger's Theorem). Equality holds in (7) for all $x, y$; that is, $\kappa(x, y) = \lambda(x, y)$. In particular, $G$ is $k$-connected if and only if every pair of vertices admits a DPF of size at least $k$.

These two assertions are sometimes called the local and global formulations of Menger's theorem, respectively. One direction is clear: if $G$ has a separator $S$ of size $k - 1$, then no pair $x, y$ of vertices separated by $S$ has a DPF of size $k$, since every $x,y$-path has to use a vertex of $S$. The other direction is the interesting one. There are three proofs in Diestel, none of which we will do — we are going to derive it as a corollary of the more powerful Max-Flow/Min-Cut Theorem.

Warning! Warning! Warning! Disjointness is a property of families of paths, not of individual paths. There is no such thing as a "disjoint path." If you ever find yourself saying, "Let $\mathcal P$ be a disjoint path family; add more disjoint paths to $\mathcal P$ until we have a total of $\lambda(u, v)$ paths," you have made a (very common) mistake. In particular, you cannot necessarily construct a maximum DPF greedily — that is, not every maximal DPF is maximum.
For example, if $P_1$ and $P_2$ are the yellow and green paths in Figure 1(b), then every other $x,y$-path has an internal vertex in common with at least one of $P_1$ or $P_2$, so $\mathcal P = \{P_1, P_2\}$ is maximal; but it is not maximum, because there is a larger $x,y$-DPF (Figure 2(a)).

4.2. Edge connectivity.

Definition 4.5. Let $G$ be a connected simple graph. The edge-connectivity is

$\kappa' = \kappa'(G) = \min\{|F| : F \subseteq E,\ G - F \text{ not connected}\}$.

Such a set $F$ is called a disconnecting set. (This terminology is in West but not Diestel.) The graph $G$ is $k$-edge-connected iff $\kappa'(G) \ge k$.

Some observations:
• $\kappa'(G) > 0 \iff G$ is connected $\iff \kappa(G) > 0$.
• $\kappa'(G) \ge 2 \iff G$ is connected and has no bridge $\iff G$ is connected and is the union of cycles.
• $\kappa'(G)$ is not affected by loops, but can be affected by parallel edges. E.g., if $H$ is formed from $G$ by cloning every edge, then $\kappa'(H) = 2\kappa'(G)$.
• For every $v \in V$, deleting the set $E(v)$ of edges incident to $v$ isolates $v$. Therefore, $\kappa'(G) \le \delta(G)$.
• $\kappa(G)$ and $\kappa'(G)$ need not be equal. For example, the bowtie graph has $\kappa = 1$ and $\kappa' = 2$.

Theorem 4.6. Every simple graph $G$ satisfies $\kappa \le \kappa' \le \delta < n$. Moreover, given any integers $\kappa \le \kappa' \le \delta < n$, there exists a simple graph with those parameters.

The proof is left as an exercise. The following notation will be useful. If $X$ and $Y$ are disjoint subsets of $V(G)$, then we write

$[X, Y] = [X, Y]_G = \{e \in E(G) \mid e \text{ has one endpoint in } X \text{ and its other endpoint in } Y\}$.

Also, if $A$ is an edge set and $x$ is a vertex, we write $A(x)$ for the set of edges in $A$ incident to $x$. E.g., $E(x) = [\{x\}, V \setminus \{x\}]_G$.

Definition 4.7. A cut or edge cut is a set of the form $[S, \bar S]$, where $\emptyset \ne S \subsetneq V$ and $\bar S = V \setminus S$. The sets $S$ and $\bar S$ are called the sides of the cut.

Warning: Not every disconnecting set is a cut. For example, if $G$ is any nontrivial connected graph, then $E(G)$ is certainly a disconnecting set, but $E(G)$ can only be written in the form $[S, \bar S]$ if $G$ is bipartite. On the other hand...

Proposition 4.8. Let $F \subseteq E(G)$. Then $G - F$ is disconnected iff $F$ contains a cut.

Proof. If $F \supseteq [S, \bar S]$, then $G - F$ contains no path from $S$ to $\bar S$. OTOH, if $G - F$ is disconnected, then we can take $S$ to be the vertex set of any component of $G - F$. □

This proposition seems trivial but is very useful. Given an edge set $F$ that you want to show is a disconnecting set, it is typically easier and more natural to construct a vertex set $S$ such that $F \supseteq [S, \bar S]$ than it is to show directly that $G - F$ is not connected.

Remark 4.9. A cut can strictly contain another cut. For example, consider a 4-cycle with vertices labeled 1, 2, 3, 4 in cyclic order. Let $S = \{1, 2\}$ and $T = \{1, 3\}$. Then $[T, \bar T]$ is the entire edge set and $[S, \bar S]$ is not. More generally, if $G$ is bipartite with bipartition $X, Y$, then $[X, Y] = E(G)$, which is certainly not a minimal cut unless $G = K_2$.

Definition 4.10. A bond is a minimal cut.

Proposition 4.11. Let $G$ be connected and let $F = [S, \bar S]$ be a cut. Then $F$ is a bond if and only if $c(G - F) = 2$ (i.e., $G - F$ has exactly two components).

Proof. If $F$ is a bond, then by minimality $G - F + e$ is connected for every $e \in F$, so $c(G - F)$ must equal 2. On the other hand, if $G - F$ has exactly two components, then they must be $G|_S$ and $G|_{\bar S}$ for some $\emptyset \ne S \subsetneq V$. So every $e \in F$ has one endpoint in each of $S$ and $\bar S$, and hence $G - F + e$ is connected. Therefore no proper subset of $F$ is a disconnecting set, hence $F$ is a bond. □

Bonds behave like cycles. A spanning tree is an edge set that contains no cycle and meets every bond (since if it were disjoint from some bond then it would be disconnected). This is equivalent to saying that the complement of a spanning tree contains no bond and meets every cycle.
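Proposition 4.11 also gives a cheap computational test for bondhood: delete the cut and count components. A quick sketch (count_components is a hypothetical helper written with depth-first search; edges are 2-tuples):

    def count_components(vertices, edges):
        """Number of connected components of the graph (V, E)."""
        adj = {v: set() for v in vertices}
        for u, w in edges:
            adj[u].add(w)
            adj[w].add(u)
        seen, comps = set(), 0
        for v in vertices:
            if v not in seen:
                comps += 1
                stack = [v]
                while stack:
                    x = stack.pop()
                    if x not in seen:
                        seen.add(x)
                        stack.extend(adj[x] - seen)
        return comps

    def is_bond(vertices, edges, F):
        """Test whether a cut F of a connected graph is a bond
        (Proposition 4.11: true iff G - F has exactly two components)."""
        remaining = [e for e in edges
                     if e not in F and (e[1], e[0]) not in F]
        return count_components(vertices, remaining) == 2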
Proposition 4.12. Let $C, C'$ be cycles and let $B, B'$ be bonds.
(1) $C \triangle C'$ is the (edge-)disjoint union of cycles.
(2) If $C \ne C'$ are cycles in $G$ and $e \in C \cap C'$, then $(C \cup C') - \{e\}$ contains a cycle.
(3) $B \triangle B'$ is a cut.
(4) If $B, B'$ are bonds in $G$ and $e \in B \cap B'$, then $(B \cup B') - \{e\}$ contains a bond.

The proofs are left to the reader. They are both closely related to the exchange rules for spanning trees (if $T, T'$ are spanning trees and $e \in T - T'$, then there is some edge $e' \in T' - T$ so that $T - e + e'$ is a spanning tree). Sneak preview: if $G$ is planar, then there is a graph $G^*$, called the planar dual of $G$, such that cycles of $G$ correspond to bonds of $G^*$, and vice versa.

Definition 4.13. Let $x, y \in V(G)$. An edge-disjoint path family for $x, y$ (for short, "$x,y$-EPF") is a family of distinct $x,y$-paths $\mathcal P = \{P_1, \ldots, P_k\}$ such that no two of the $P_i$ have a common edge. We will temporarily denote the maximum size of an $x,y$-EPF by $\lambda'(x, y)$.

[Figure: an x,y-EPF in a graph with internal vertices a, b, c, d.]

Every DPF is an EPF, but not vice versa. Therefore, $\lambda'(x, y) \ge \lambda(x, y)$ for all $x, y$. Note that every cut separating $x$ from $y$ must have cardinality at least $\lambda'(x, y)$. In other words, we have a weak duality relation

(8) $\max\{|\mathcal P| : \mathcal P \text{ is an } x,y\text{-EPF}\} \le \min\{|[S, \bar S]| : x \in S,\ y \notin S\}$.

Theorem 4.14 (Menger's Theorem, edge version). Equality holds in (8) for all $x, y$; that is, $\kappa'(x, y) = \lambda'(x, y)$, where $\kappa'(x, y)$ denotes the right-hand side of (8). In particular, $G$ is $k$-edge-connected if and only if every pair of vertices admits an EPF of size at least $k$.

Again, one direction (weak duality) is easy, and the other direction is the hard one. We will soon show (in §4.5) that the edge version of Menger's theorem is a consequence of the vertex version.

4.3. The structure of 2-connected and 2-edge-connected graphs. There are several useful equivalent conditions for 2-connectivity.

Definition 4.15. Let $G$ be a connected graph. Let $v, w \in V(G)$, possibly equal. Construct a new graph $G'$ from $G$ by adding a new path $v = x_0, e_0, x_1, \ldots, x_{n-1}, e_{n-1}, x_n = w$, whose edges and internal vertices are new. This path is called an ear. It is a closed ear if $v = w$ (in which case the trail is a cycle) and an open ear if $v \ne w$ (in which case the trail is a path).

Theorem 4.16 (Characterization of 2-Connectivity). Let $G$ be a simple connected graph with $n(G) \ge 3$. The following are equivalent:
(A) $G$ is 2-connected.
(B) Every two vertices of $G$ lie on a common cycle.
(C) Every two edges lie on a common cycle.
(D) $G$ has an open ear decomposition $G = C \,\dot\cup\, P_1 \,\dot\cup \cdots \dot\cup\, P_k$, where $C$ is a cycle and each $P_i$ is an open ear.

[Figure: an open ear decomposition with cycle C and ears $P_1, \ldots, P_5$.]

Sketch of proof. (A $\iff$ B): This is a special case of Menger's Theorem, which we will prove later.

(D $\implies$ A): The original cycle is 2-connected, and so is any graph obtained by adjoining ears to a 2-connected graph.

(A/B $\implies$ C): Let $e, e'$ be edges. If $e, e'$ share an endpoint, say $e = yx$ and $e' = yz$, then by 2-connectivity we can find an $x,z$-path $P$ in $G - y$, and then $e + P + e'$ is a cycle. Otherwise, let $e = wx$ and $e' = yz$. Construct a new graph $G'$ by adding vertices $s, t$ and edges $sw, sx, ty, tz$. This is equivalent to adding two open ears, so $G'$ is 2-connected. In particular $G'$ has two disjoint $s,t$-paths $P, P'$; WLOG $P = sw \cdots yt$ and $P' = sx \cdots zt$. Then $P + P' - sw - sx - ty - tz + e + e'$ is a cycle containing $e$ and $e'$.

[Figure: the paths P and P' in G', together with the edges e = wx and e' = yz.]

(C $\implies$ D): If $G$ is a graph satisfying (C), then the following algorithm produces an ear decomposition of $G$: Let $G_0 \subseteq G$ be any cycle. (So $G_0$ is 2-connected.) Initialize $i = 1$.
During the $i$th step:
• Choose any edges $e \in E(G) - E(G_{i-1})$ and $f \in E(G_{i-1})$.
• Let $C$ be a cycle containing $e$ and $f$. (Such a cycle must exist by condition (C).)
• Let $P_i$ be the smallest subpath of $C$ that contains $e$ but no edges of $G_{i-1}$.
• This is the new ear; set $G_i := G_{i-1} \cup P_i$.
• Increment $i$.
Repeat until $E(G_i) = E(G)$.

[Figure: the cycle C through e and f, with the new ear $P_i$ attached to $G_{i-1}$.] □

There is an analogous theorem for 2-edge-connected graphs. For the purpose of this statement, define a circuit to be a closed walk without repeated edges (but allowing repeated vertices).

Theorem 4.17 (Characterization of 2-Edge-Connectivity). Let $G$ be a connected graph with $n(G) \ge 2$. The following are equivalent:
(A) $G$ is 2-edge-connected.
(B) Every two vertices $u, v \in V(G)$ lie on a common circuit.
(C) $G$ has an ear decomposition $G = C \,\dot\cup\, P_1 \,\dot\cup \cdots \dot\cup\, P_s$, where $C$ is a cycle and each $P_i$ is an ear (either open or closed).
(D) $G$ has a strong orientation, i.e., it is possible to orient all edges so that every edge belongs to a directed cycle.

The figure on the left is an example of a closed-ear decomposition. Note that $P_3$ is a closed ear — it can be regarded as a closed path from $v$ to $v$, where $v$ is the indicated vertex of $C \cup P_1 \cup P_2$. Indeed, the graph is 2-edge-connected but not 2-connected. The figure on the right shows how to translate a closed-ear decomposition into a strong orientation (an example of the implication (C) $\implies$ (D) of the theorem): just orient the original cycle consistently (which can be done in one of two ways), and whenever you add an ear, orient it consistently (which again can be done in one of two ways).

[Figure: a closed-ear decomposition $C \cup P_1 \cup \cdots \cup P_5$ (left), and a strong orientation obtained from it (right).]

4.4. Counting strong orientations. Let $G$ be a graph (with loops and multiple edges allowed). Let $S(G)$ be the set of strong orientations of $G$, and let $s(G) = |S(G)|$. What can we say about this number?
• $s(G) > 0$ iff $G$ is 2-edge-connected, by Thm. 4.17.
• $s(G)$ is always even (unless $G = K_1$), because reversing the direction of every edge preserves strongness.
• Cycles have two strong orientations ("clockwise" and "counterclockwise").
• For consistency, this should still be true for the loop $C_1$. In fact, adding a loop to $G$ should double the value of $s(G)$, since the loop itself can be oriented in one of two ways.
• The value of $s(K_n)$ is not so obvious. Starting with $n = 1$, the sequence begins 1, 0, 2, 24, 544, 22320, 1677488, ...

Theorem 4.18. The invariant $s$ satisfies a deletion/contraction recurrence: $s(G) = s(G/e) + s(G - e)$.

Proof. Let $D$ be a strong orientation of $G$, and suppose that there exists $e \in E(G)$ such that reversing $e$ in $D$ also gives a strong orientation (call such an edge reversible in $D$). Then $D - e$ is a strong orientation of $G - e$. On the other hand, each strong orientation of $G - e$ certainly gives rise to a pair of strong orientations of $G$. So

$s(G - e) = \frac{1}{2}\,\#\{D \in S(G) \mid e \text{ is reversible}\}$.

Now let $D'$ be a strong orientation of $G/e$. Such a thing could come from a pair of strong orientations of $G$ that are identical except for the orientation of $e$ (so $e$ is reversible), or from a single strong orientation of $G$ in which $e$ is not reversible. Therefore,

$s(G/e) = \frac{1}{2}\,\#\{D \in S(G) \mid e \text{ is reversible}\} + \#\{D \in S(G) \mid e \text{ is not reversible}\}$.

Adding these two equations gives the desired recurrence. □

This should remind you of the deletion-contraction recurrence for $\tau(G)$, the number of spanning trees. Stay tuned!

4.5. Menger implies edge-Menger.

Definition 4.19. Let $G$ be a simple graph. The line graph $L(G)$ is defined by

$V(L(G)) = E(G)$, $\qquad E(L(G)) = \{ef : e, f \text{ have a common endpoint}\}$.

Note that the edge set of a (closed) trail in $G$ corresponds to the vertex set of a path in $L(G)$, as in the figure below.

[Figure: a graph G on vertices 1, 2, 3, 4 and its line graph L(G), whose vertices are labeled by the edges 12, 13, 14, 24, 34 of G.]
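Definition 4.19 is one line of set arithmetic in code. A minimal sketch, representing each edge of $G$ as a 2-element frozenset so that the edges can serve directly as vertices of $L(G)$; the example edge list is my reading of the figure above ($K_4$ minus the edge 23):

    from itertools import combinations

    def line_graph(edges):
        """L(G): vertices are the edges of G; two are adjacent iff
        they share an endpoint (Definition 4.19)."""
        edges = [frozenset(e) for e in edges]
        return edges, [(e, f) for e, f in combinations(edges, 2) if e & f]

    lg_vertices, lg_edges = line_graph([(1, 2), (1, 3), (1, 4), (2, 4), (3, 4)])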
Theorem 4.20. The vertex version of Menger's Theorem implies the edge version.

Proof. Assume that the vertex version of Menger's Theorem holds. Let $G$ be a graph, $x, y \in V(G)$, $xy \notin E(G)$. Let $G'$ be the graph formed from $G$ by adding vertices $s, t$ and edges $sx, yt$. Construct the line graph $L(G')$.

[Figure: a graph G' obtained by attaching s to x and t to y, and its line graph L(G'); the vertices of L(G') corresponding to the edges sx and yt are marked.]

We are now going to apply vertex-Menger to $L(G')$ in order to deduce edge-Menger for $G$. First, observe that a set $A \subseteq E(G)$ disconnects $x$ from $y$ in $G$ if and only if the corresponding vertices separate $sx$ from $yt$ in $L(G')$. Therefore,

(9) $\kappa'_G(x, y) = \kappa_{L(G')}(sx, yt)$.

Second, the vertex version of Menger's theorem implies that

(10) $\kappa_{L(G')}(sx, yt) = \lambda_{L(G')}(sx, yt)$.

Third, observe that for each $x,y$-path $P$ in $G$, there is a corresponding $s,t$-path $P' = sx \cdot P \cdot yt$ in $G'$ and an $sx,yt$-path $\hat P$ in $L(G')$, and that paths $P, Q$ in $G$ are edge-disjoint if and only if $\hat P, \hat Q$ are internally disjoint. Therefore,

(11) $\lambda_{L(G')}(sx, yt) = \lambda'_G(x, y)$,

and chaining (9), (10), and (11) together completes the proof. □

4.6. Network flows.

Definition 4.21. A network or $s,t$-network $N = (G, s, t, c)$ consists of a digraph $G = (V, E)$ with two distinguished vertices $s, t$, called the source and sink respectively, and a capacity function $c : E \to \mathbb N_{>0}$. We may assume that $G$ is a simple digraph: it has no loops, and for every $x, y \in V$ there is at most one edge of the form $\vec{xy}$ and at most one edge of the form $\vec{yx}$.

[Figure: an s,t-network on vertices s, a, b, p, q, r, y, z, t, with capacities c(e) shown in blue.]

(Note: Diestel does this a bit differently; he starts with an undirected graph in which each edge $e$ can be thought of as a pair of anti-parallel edges $\vec e$ and $\overleftarrow e$, each with a different capacity. This is an equivalent model, but it requires more complex notation.)

We want to think of a network as modeling a situation where stuff (data, traffic, liquid, electrical current, etc.) is flowing from source $s$ to sink $t$. The capacity of an edge is the amount of stuff that can flow through it (or perhaps the amount of stuff per unit time). This is a very general model that can be specialized to describe cuts, connectivity, matchings and other things in directed and undirected graphs.

A flow on $N$ is a function $f : E \to \mathbb R$ that satisfies the constraints

(12) $0 \le f(e) \le c(e)$ for all $e \in E$ (the capacity constraints),
(13) $f_{\mathrm{in}}(v) = f_{\mathrm{out}}(v)$ for all $v \in V \setminus \{s, t\}$ (the conservation constraints),

where for $v \in V$ we define

(14) $f_{\mathrm{in}}(v) = \sum_{e = \vec{uv}} f(e)$, $\qquad f_{\mathrm{out}}(v) = \sum_{e = \vec{vw}} f(e)$.

The function $f(e) = 0$ is of course a flow; Figure 3 shows a nontrivial example. I will consistently use blue for capacities and red for flows. Note that the conservation constraints say that flow cannot accumulate at any internal vertex.

The value $|f|$ of a flow $f$ is the net flow into the sink:

(15) $|f| := f_{\mathrm{in}}(t) - f_{\mathrm{out}}(t) = f_{\mathrm{out}}(s) - f_{\mathrm{in}}(s)$.

To see the second equality, note that

$\sum_{e \in E(G)} f(e) = \sum_{v \in V(G)} f_{\mathrm{in}}(v) = \sum_{v \in V(G)} f_{\mathrm{out}}(v)$,

and by the conservation constraints, most of the summands cancel, leaving only

$f_{\mathrm{in}}(s) + f_{\mathrm{in}}(t) = f_{\mathrm{out}}(s) + f_{\mathrm{out}}(t)$,

from which the second equality easily follows. Since we are concerned with maximizing $|f|$, we typically assume that $s$ has no in-edges and $t$ has no out-edges, so that (15) can be simplified to

(16) $|f| = f_{\mathrm{in}}(t) = f_{\mathrm{out}}(s)$.

[Figure 3. A capacity function and a compatible flow.]

The flow $f$ shown in Figure 3 has $|f| = 3$.
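Constraints (12)–(13) and the value (15) are mechanical to verify. A minimal sketch, assuming a network is represented by a dict cap from directed edges (u, v) to capacities, with a flow as a dict f on the same keys (the representation and function names are my own):

    def is_flow(cap, f, s, t):
        """Check the capacity constraints (12) and conservation (13)."""
        if any(not 0 <= f[e] <= cap[e] for e in cap):
            return False
        vertices = {v for e in cap for v in e}
        for v in vertices - {s, t}:
            f_in = sum(f[e] for e in cap if e[1] == v)
            f_out = sum(f[e] for e in cap if e[0] == v)
            if f_in != f_out:
                return False
        return True

    def value(cap, f, s):
        """|f| = f_out(s) - f_in(s), per (15)."""
        return (sum(f[e] for e in cap if e[0] == s)
                - sum(f[e] for e in cap if e[1] == s))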
Max-Flow Problem: Given a source-sink network $(G, s, t, c)$, find a flow of maximum value.

We need a way of increasing the value of a given flow $f$, or showing that no such way exists. (This ought to remind you of the Augmenting Path Algorithm.) The naive way is to look for an "$f$-augmenting path" — an $s,t$-path $P \subseteq N$ in which no edge of $P$ is being used to its full capacity, that is, such that $f(e) < c(e)$ for all $e \in P$. In this case, we can increase the flow along every edge of the path by some nonzero amount $\varepsilon$, so as to preserve the conservation and capacity constraints and increase the value of the flow by $\varepsilon$.

However, there can be nonmaximum flows for which no such path $P$ exists. Consider the network shown in Figure 4. Continuing the analogy with matchings and the APA, the flow $f$ on the left is "maximal," in the sense that there does not exist any flow $f'$ such that $|f'| > |f|$ and $f'(e) \ge f(e)$ for every $e \in E$. However, it is not maximum: $|f| = 1$, while the flow $g$ on the right has $|g| = 2$.

[Figure 4. Two "maximal" flows, one with an augmenting path (highlighted).]

There is a more general way to increase flow: allow the augmenting path $P$ to contain edges that point in the wrong direction, provided the flow along them is nonzero. As far as the conservation constraints and the value of the flow are concerned, decreasing "backward" flow is equivalent to increasing "forward" flow. In Figure 4, the "forward" edges $a, c$ are not being used to full capacity, and the "backward" edge $b$ carries flow "in the wrong direction." So we can define a new flow $\tilde f$ by

$\tilde f(a) = f(a) + 1$, $\quad \tilde f(b) = f(b) - 1$, $\quad \tilde f(c) = f(c) + 1$, $\quad \tilde f(e) = f(e)$ for all other edges $e$.

Then $\tilde f$ satisfies the capacity and conservation constraints, and $|\tilde f| = |f| + 1$. (In fact $\tilde f = g$.)

Definition 4.22. Let $f$ be a flow in an $s,t$-network $N = (G, s, t, c)$. Let $P$ be an $s,t$-path in the underlying undirected graph, which may traverse some edges forward and others backward. The tolerance of an edge $e \in P$ is defined as

$\varepsilon(e) = \begin{cases} c(e) - f(e) & \text{if } P \text{ traverses } e \text{ forward}, \\ f(e) & \text{if } P \text{ traverses } e \text{ backward}, \end{cases}$

and the tolerance of the path $P$ is $\varepsilon(P) = \min_{e \in P} \varepsilon(e)$. The path $P$ is augmenting for $f$ if $\varepsilon(P) > 0$.

The proof of the following proposition is then completely routine.

Proposition 4.23. If $P$ is an $f$-augmenting path with tolerance $\varepsilon = \varepsilon(P)$, then the function $\tilde f$ defined by

$\tilde f(e) = \begin{cases} f(e) + \varepsilon & \text{if } P \text{ traverses } e \text{ forward}, \\ f(e) - \varepsilon & \text{if } P \text{ traverses } e \text{ backward}, \\ f(e) & \text{otherwise} \end{cases}$

is a flow (i.e., it satisfies the capacity and conservation constraints), and $|\tilde f| = |f| + \varepsilon$.

The dual problem to the Max-Flow problem is the Min-Cut problem.

Definition 4.24. Let $N = (G, s, t, c)$ be an $s,t$-network. A source-sink cut is a directed cut of the form

$[S, T] = \{\vec{xy} \in E(G) : x \in S,\ y \in T\}$,

where $V(G) = S \,\dot\cup\, T$, $s \in S$, and $t \in T$. (Note that this is a directed graph, so we only include edges from $S$ to $T$, not from $T$ to $S$.) The capacity of the cut is

$c(S, T) = \sum_{e \in [S,T]} c(e)$.

[Figure: a source-sink cut in the example network, with $S = \{s, a, b, p, q\}$ (gold) and $T = \bar S = \{t, r, y, z\}$ (pink).]

In the figure above, the resulting source-sink cut is $[S, T] = \{\vec{br}, \vec{py}, \vec{qz}\}$ (highlighted in cyan), so $c(S, T) = 2 + 1 + 2 = 5$. Note that the $T,S$-edges $\vec{yq}$ and $\vec{rq}$ are not considered part of the cut.

Min-Cut Problem: Find a source-sink cut of minimum capacity.

A source-sink cut can be thought of as a bottleneck: a channel through which all flow must pass.
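In code, Definition 4.24 is a one-liner — note how only the $S$-to-$T$ edges contribute, echoing the remark above. (Same dict-of-capacities representation as before; a sketch, not a standard API.)

    def cut_capacity(cap, S):
        """c(S, T): total capacity of the directed edges from S to T = V - S."""
        return sum(cap[(u, w)] for (u, w) in cap if u in S and w not in S)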
Since all flow must pass through the cut, the capacity of any cut should be an upper bound for the maximum value of a flow — this is the "weak duality" inequality, analogous to the easy directions of results such as the König–Egerváry theorem and the various versions of Menger's theorem. For a flow $f$ and a vertex set $A \subseteq V$, define

(17) $f(A, \bar A) = \sum_{e \in [A, \bar A]} f(e) - \sum_{e \in [\bar A, A]} f(e)$.

Proposition 4.25. Let $f$ be a flow, and let $A \subseteq V$. Then:

(18) $f(A, \bar A) = \sum_{w \in A} \big(f_{\mathrm{out}}(w) - f_{\mathrm{in}}(w)\big)$.

In particular, if $[S, T]$ is a source-sink cut, then

(19) $f(S, T) = |f| \le c(S, T)$.

That is, the Max-Flow and Min-Cut problems are weakly dual.

Proof. Add zero to $f(A, \bar A)$ and be careful about the bookkeeping:

$f(A, \bar A) = \Big(\sum_{e \in [A,A]} f(e) + \sum_{e \in [A,\bar A]} f(e)\Big) - \Big(\sum_{e \in [A,A]} f(e) + \sum_{e \in [\bar A,A]} f(e)\Big)$
$= \sum_{e = \vec{vw}:\ v \in A} f(e) - \sum_{e = \vec{vw}:\ w \in A} f(e)$
$= \sum_{w \in A} \Big(\sum_{e:\ \mathrm{tail}(e) = w} f(e) - \sum_{e:\ \mathrm{head}(e) = w} f(e)\Big)$
$= \sum_{w \in A} \big(f_{\mathrm{out}}(w) - f_{\mathrm{in}}(w)\big)$,

establishing (18). In particular, if $[S, T]$ is a source-sink cut and $f$ is any flow, then

$f(S, T) = \sum_{w \in S} \big(f_{\mathrm{out}}(w) - f_{\mathrm{in}}(w)\big) = f_{\mathrm{out}}(s) = |f|$,

since conservation kills every summand except the one for $w = s$; but on the other hand $f(S, T) \le c(S, T)$ by the capacity constraints (12). □

4.7. The Ford-Fulkerson algorithm. In fact, the Max-Flow and Min-Cut problems are strongly dual. They can be solved simultaneously in finite time by the following simple but very powerful algorithm, due to Ford and Fulkerson.

The Ford-Fulkerson Algorithm
Input: a network $N = (G, s, t, c)$
Output: a maximum flow $f$ and a minimum $s,t$-cut $[S, T]$
Initialize $f(e) = 0$ for all $e$.
Repeat:
  Let $S$ be the set of all vertices reachable from $s$ by an $f$-augmenting path.
  If $t \in S$ ("breakthrough"), then increase flow along some augmenting path,
until breakthrough does not occur.
Return the flow $f$ and the cut $[S, \bar S]$.

Theorem 4.26 (The Max-Flow/Min-Cut Theorem — "MFMC"). The Ford-Fulkerson algorithm finishes in finite time and computes a maximum flow and a minimum cut.

Proof. Since everything in sight is an integer, each instance of breakthrough increases $|f|$ by at least 1. Therefore, the algorithm will terminate in a number of steps equal to or less than the minimum capacity of an $s,t$-cut.

Let $f$ and $[S, T]$ be the output of the FFA. The fact that breakthrough did not occur means that every forward edge of $[S, T]$ is being used to full capacity, and no backward edge has positive flow. That is,

$f(e) = c(e)$ for all $e \in [S, T]$ and $f(e) = 0$ for all $e \in [T, S]$.

But this says exactly that $|f| = f(S, T) = c(S, T)$, and so by weak duality, $f$ is a maximum flow and $[S, T]$ is a minimum source-sink cut. □
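Here is a compact Python sketch of the whole algorithm, for integer capacities. The tolerances of Definition 4.22 appear implicitly in the search: a forward edge is usable if $f(e) < c(e)$, a backward edge if $f(e) > 0$. This is a minimal illustration (it scans all edges at each step), not an efficient implementation.

    def ford_fulkerson(cap, s, t):
        """cap: dict from directed edges (u, v) to integer capacities.
        Returns a maximum flow and a minimum source-sink cut (S, T)."""
        f = {e: 0 for e in cap}
        while True:
            # Vertices reachable from s by an f-augmenting path (BFS).
            parent, queue = {s: None}, [s]
            while queue:
                v = queue.pop(0)
                for (u, w) in cap:
                    if u == v and w not in parent and f[(u, w)] < cap[(u, w)]:
                        parent[w] = ((u, w), +1)   # forward edge
                        queue.append(w)
                    elif w == v and u not in parent and f[(u, w)] > 0:
                        parent[u] = ((u, w), -1)   # backward edge
                        queue.append(u)
            if t not in parent:                    # no breakthrough
                S = set(parent)
                return f, (S, {v for e in cap for v in e} - S)
            # Breakthrough: trace the path back from t, augment by its tolerance.
            path, v = [], t
            while parent[v] is not None:
                e, sign = parent[v]
                path.append((e, sign))
                v = e[0] if sign == +1 else e[1]
            eps = min(cap[e] - f[e] if sign == +1 else f[e]
                      for e, sign in path)
            for e, sign in path:
                f[e] += sign * eps

Example 4.27 below carries out the same computation by hand; note that the choice of augmenting path at each step is not determined, so a program may take a different route to the same maximum value.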
Example 4.27. Let $N$ be the network shown below.

[Figure: the network N on vertices s, a, b, c, d, t, with capacities shown in parentheses.]

Initialize $f$ to be the zero flow and work through the algorithm. Note that there may be several possible augmenting paths in each iteration, so in that sense the algorithm is not deterministic.

Step 1: Augmenting path: $P = s, a, d, c, t$.
Edge tolerances: $\varepsilon(sa) = 64$, $\varepsilon(ad) = 30$, $\varepsilon(dc) = 24$, $\varepsilon(ct) = 25$.
Path tolerance: $\min\{64, 30, 24, 25\} = 24$.

[Figure: the flow after Step 1, with $|f| = 24$.]

Step 2: Augmenting path: $P = s, b, d, t$.
Tolerance: $\varepsilon(P) = \min\{27, 20, 57\} = 20$.

[Figure: the flow after Step 2, with $|f| = 44$.]

Step 3: Augmenting path: $P = s, a, c, d, t$. Note that $\vec{dc} \in E$, so we have a backward edge.
Tolerance: $\varepsilon(P) = \min\{40, 49, 24, 37\} = 24$. Note that the tolerance of the backward edge is $f(\vec{dc}) = 24$.
New flow: add 24 to $f(\vec{sa})$, $f(\vec{ac})$, $f(\vec{dt})$; subtract 24 from $f(\vec{dc})$.

[Figure: the flow after Step 3, with $|f| = 68$.]

Step 4: Augmenting path: $P = s, a, d, t$.
Tolerance: $\varepsilon(P) = \min\{16, 6, 13\} = 6$.

[Figure: the flow after Step 4, with $|f| = 74$.]

Step 5: Augmenting path: $P = s, a, c, t$.
Tolerance: $\varepsilon(P) = \min\{64 - 54, 49 - 24, 25 - 24\} = 1$.

[Figure: the flow after Step 5, with $|f| = 75$.]

Step 6: At this point in the algorithm, breakthrough fails, since every edge in the cut $[S, T]$, where $S = \{s, a, b, c\}$ and $T = \{d, t\}$, is either a forward edge being used at capacity (yellow) or a backward edge with flow 0 (blue).

[Figure: the final flow, with $|f| = 75$, and the cut $[S, T]$.]

Moreover, $c(S, T) = c(\vec{ad}) + c(\vec{bd}) + c(\vec{ct}) = 30 + 20 + 25 = 75 = |f|$. So $f$ is a maximum flow and $[S, T]$ is a minimum cut. The algorithm has succeeded!!

Note that the algorithm still works if the capacities are required only to be rational, rather than integer. It is really the same problem — since the network has finitely many edges, simply multiply every capacity by a common denominator to convert the problem to an integer one. If the capacities are allowed to be real, then the Max-Flow and Min-Cut problems are still strongly dual (in the sense of linear programming), but the Ford-Fulkerson algorithm may not terminate.

Example 4.28. This example is taken from V. Chvátal, Linear Programming (W.H. Freeman & Co., 1983), pp. 387–388. Let $N = (G, s, t, c)$ be the network shown below, with capacity function

$c(\vec{uy}) = c(\vec{xw}) = (-1 + \sqrt 5)/2$, $\qquad c(\vec{vw}) = 1$, $\qquad c(\vec e) = 10^{100}$ for all other $e \in E(G)$.

[Figure: the network of Example 4.28, on vertices s, u, v, w, x, y, z, t.]

Here the success of the Ford-Fulkerson algorithm depends on the augmenting paths chosen. If the algorithm is unlucky, the sequence of augmenting paths used to increase flow might be

$P_1 = suyvwt$, $P_2 = syuvwzt$, $P_3 = suyxwt$, $P_4 = syuxwvt$, $P_5 = suyzwt$, $P_6 = syuzwxt$, $P_1, P_2, P_3, \ldots$,

resulting in an infinite loop. (I haven't worked out the details.)

4.8. Corollaries of the Max-Flow/Min-Cut Theorem. Henceforth, we assume that all capacity functions and flows are integer-valued.

Proposition 4.29 (Acyclic flows). Every network $N$ has a maximum flow $f$ that is acyclic, in the sense that for every directed cycle $C \subseteq N$, there is at least one edge of $C$ with $f(e) = 0$.

Proof. More generally, any flow $f$ can be "acyclized" as follows. If $C$ is a directed cycle with $f(e) > 0$ for every $e \in C$, then $\varepsilon = \min\{f(e) : e \in C\} > 0$, and we can define a new flow $\tilde f$ by

$\tilde f(e) = \begin{cases} f(e) - \varepsilon & e \in C, \\ f(e) & e \notin C. \end{cases}$

Then $\tilde f$ satisfies the capacity and conservation constraints and $|\tilde f| = |f|$. Moreover, $\tilde f$ has at least one more edge with zero flow than $f$ did, so if we repeat this construction we will eventually produce an acyclic flow. □

Proposition 4.30 (Partitionability of flows). Every acyclic integer flow $f$ can be "partitioned into paths." That is, there is a family of directed $s,t$-paths $\mathcal P = \{P_1, \ldots, P_k\}$ such that $k = |f|$ and

$f(e) = \#\{i \mid e \in P_i\}$.

Warning: There is certainly no guarantee that $\mathcal P$ is a DPF or EPF as defined above in §4.1 and §4.2. In fact, typically it will be neither.

Proof. The following algorithm constructs such a family:
• Initialize $\mathcal P := \emptyset$.
• Start walking from $s$ along edges of positive flow until you reach $t$; call the walk $P$.
• Put $P$ into $\mathcal P$, and reduce flow by 1 along every edge of $P$.
• Repeat until $|f| = 0$.
Since $f$ is acyclic, every walk $P$ is a path. Also, the conservation constraints imply that we never get "stuck" when forming $P$ — after entering an internal vertex of the network, it is always possible to leave. Finally, each time we put a new path into $\mathcal P$, the value $|f|$ decreases by 1. □
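The proof's algorithm is equally short in code. A minimal sketch, assuming f is an acyclic integer flow represented, as before, by a dict on directed edges:

    def partition_into_paths(f, s, t):
        """Split an acyclic integer flow into |f| directed s,t-paths
        (Proposition 4.30). Works on a copy of f."""
        f = dict(f)
        paths = []
        while any(f[e] > 0 for e in f if e[0] == s):
            path, v = [s], s
            while v != t:
                # Conservation guarantees an out-edge with positive flow,
                # and acyclicity guarantees the walk is a path.
                e = next(e for e in f if e[0] == v and f[e] > 0)
                f[e] -= 1
                v = e[1]
                path.append(v)
            paths.append(path)
        return paths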
The warning above applies to networks in general. On the other hand, if we are clever about choosing the capacity function, we will automatically obtain additional constraints on the path family obtained by partitioning. For example, let $D$ be a digraph with $s, t \in V(D)$. Treat $D$ as a unit-capacity network, i.e., $c(e) = 1$ for every $e \in E$. Then
• $\mathcal P$ is a (directed) $s,t$-EPF, and
• $c(S, T) = |[S, T]|$ for any source-sink cut $[S, T]$.
Therefore, applying MFMC in this context, we obtain the directed edge version of Menger's theorem:

$\max\{|\mathcal P| : \mathcal P \text{ is a directed } s,t\text{-EPF}\} = \min\{|[S, T]| : [S, T] \text{ an } s,t\text{-cut}\} = \kappa'(s, t)$.

This is a common technique for proving min/max theorems in graph theory: transform an arbitrary graph into a source/sink network in some clever way, then apply MFMC to the network to recover the desired result for your original graph. The undirected vertex version of Menger's theorem requires a slightly more elaborate transformation.

Theorem 4.31 (Menger's Theorem, finally!). Let $G$ be a simple graph, and let $s, t \in V(G)$ be nonadjacent. Then

$\max\{|\mathcal P| : \mathcal P \text{ is an } s,t\text{-DPF}\} = \min\{|S| : S \text{ is an } s,t\text{-separator}\}$.

Proof. Construct an $s,t$-network $N$ by splitting every internal vertex $v$ into an in-vertex $v^-$ and an out-vertex $v^+$. That is,

$V(N) := \{s, t\} \cup \{v^-, v^+ \mid v \in V(G) \setminus \{s, t\}\}$,
$E(N) := \{\vec{v^-v^+} : v \in V(G) \setminus \{s, t\}\}$ ("private"; capacity 1)
$\qquad\quad \cup\ \{\vec{v^+w^-} : vw \in E(G)\}$ ("public"; capacity $\infty$),

where each undirected edge $vw$ of $G$ gives rise to the two public edges $\vec{v^+w^-}$ and $\vec{w^+v^-}$, and we interpret $s^+ = s^- = s$ and $t^+ = t^- = t$.

[Figure: a graph G on vertices s, a, b, c, d, t, and the corresponding split network N with in-vertices $a^-, \ldots, d^-$ and out-vertices $a^+, \ldots, d^+$.]

For every $v \in V(G) \setminus \{s, t\}$ and every integer flow $f$, we have $f_{\mathrm{out}}(v^-) = f_{\mathrm{in}}(v^+) \in \{0, 1\}$.

Let $f$ be a feasible integer acyclic flow of $N$, partitioned into paths $P'_1, \ldots, P'_k$, where $k = |f|$. Each $P'_i$ has the form $s, v_1^-, v_1^+, \ldots, v_\ell^-, v_\ell^+, t$. The path $P'_i$ corresponds to the $s,t$-path $P_i$ in $G$ given by $s, v_1, \ldots, v_\ell, t$, and this correspondence is bijective. Moreover, each pair $\{v^-, v^+\}$ can occur in at most one $P'_i$ (because $c(\vec{v^-v^+}) = 1$), so each $v \in V(G) \setminus \{s, t\}$ belongs to at most one $P_i$ — that is, the family $\mathcal P = \{P_1, \ldots, P_k\}$ is a DPF! So integer acyclic flows in $N$ correspond exactly to $s,t$-DPFs in $G$, and $\max |f| = \lambda_G(s, t)$.

Meanwhile, the set of private edges is an $s,t$-cut of capacity $n(G) - 2$. On the other hand, any edge set containing at least one public edge has infinite capacity. [Footnote: If this bothers you, just define the capacity of the public edges large enough, say, oh, I don't know, $(73n + 86)!$, so that the sentence following the footnote is true.] Therefore, every minimum source-sink cut consists only of private edges, and the corresponding vertices of $G$ form an $s,t$-separator. Therefore

$\lambda_G(s, t) = \max |f| = \min c(S, T) = \kappa_G(s, t)$. □

Other applications of MFMC, many of which would make good projects, include the Gale–Ryser Theorem (characterizing the degree sequences of bipartite graphs) and score vectors of tournaments (which in-/out-degree sequences can arise in an orientation of $K_n$?). Related problems and generalizations include networks with multiple sources and sinks; networks in which different vertices have different supplies and demands for flow; and cost networks (where each edge has a cost per unit flow, and the problem is to find a feasible flow of fixed value and minimum cost).

4.9. Path covers and Dilworth's theorem. A digraph $D$ is called transitive if, whenever $\vec{uv}$ and $\vec{vw}$ are edges, then so is $\vec{uw}$.
An acyclic transitive digraph is essentially the same thing as a partially ordered set (or poset): the edge $\vec{uv}$ is regarded as recording the relation $u < v$; transitivity says that $u < v$ and $v < w$ imply $u < w$, and acyclicity says that we cannot have both $u < v$ and $u > v$. When drawing a transitive digraph, it's enough to specify a set of edges whose transitive closure contains all the other edges. For example, a transitive digraph whose edges include those shown on the left below must in fact contain all the edges shown on the right — but the right-hand picture is disgusting, so it's easier to show the left-hand picture and just remember that it's supposed to be transitive.

[Figure: a small set of generating edges (left) and its full transitive closure (right).]

A path cover in a transitive digraph is a collection $\mathcal P$ of directed paths that partition the vertex set (see the figure on the left below — note that the bottom edge of the big green path is not shown, but it is in the transitive closure of the edges shown). An independent set is a collection $I$ of vertices such that no two lie on any directed path (see the figure on the right below). In the language of posets, a path is a chain (a set of elements such that every two are comparable) and an independent set is an antichain (a set of elements such that no two are comparable).

[Figure: a path cover (left) and an independent set (right).]

Theorem 4.32 (Dilworth's Theorem). In every acyclic transitive digraph $D$, the minimum size of a path cover equals the maximum size of an independent set.

The proof is left as an exercise. As usual, the weak duality relation, namely $|\mathcal P| \ge |I|$ for every path cover $\mathcal P$ and independent set $I$, is the easy part. The strong duality relation is an application of the König–Egerváry Theorem (and thus ultimately of the Max-Flow/Min-Cut Theorem). There is an amazing generalization of Dilworth's Theorem due to Greene and Kleitman, but it is somewhat beyond the scope of graph theory — take algebraic combinatorics!

5. Coloring

5.1. The basics. Let $G = (V, E)$ be a simple graph and $n = n(G)$. Recall that the notation $[k]$ means the set of positive integers $\{1, 2, \ldots, k\}$.

Definition 5.1. A [proper] coloring of $G$ is a function $c : V \to S$ such that $c(v) \ne c(w)$ whenever $v, w$ are adjacent. Typically $S = [k] = \{1, 2, \ldots, k\}$ or $S = \mathbb N$. The elements of $S$ are called colors. The color class corresponding to a color $i$ is the set $c^{-1}(i) := \{v \in V \mid c(v) = i\}$. Note that the color classes are cocliques and each vertex belongs to exactly one of them. The chromatic number $\chi = \chi(G)$ is the smallest $k$ such that there is a coloring $c : V \to [k]$. The graph $G$ is $k$-colorable if $\chi(G) \le k$.

Example: Let $G$ be the Petersen graph. It is impossible to color $G$ properly using only two colors. On the other hand, three colors suffice, as in the following figure, so $\chi(G) = 3$. The statement "The Petersen graph is $k$-colorable" is true if and only if $k \ge 3$.

[Figure: a proper 3-coloring of the Petersen graph.]

Some observations:
(1) A coloring of $G$ is the same thing as a partition of $V(G)$ into cocliques, so $\chi(G)$ is the minimum number of cocliques needed to partition $V(G)$.
(2) $\chi(G) = 1$ iff $E(G) = \emptyset$.
(3) $\chi(G) = 2$ iff $G$ is bipartite and has at least one edge. (Note that the Petersen graph isn't bipartite.)
(4) If $G$ contains parallel edges, then we can ignore them — a proper coloring of $G$ is the same thing as a proper coloring of its underlying simple graph. OTOH, if $G$ contains loops, then it is impossible to properly color it! So when studying coloring, it is typically OK to assume that any graph of interest is simple.
(5) $\chi(K_n) = n$, because a coloring of $K_n$ must assign all vertices different colors. On the other hand, if $G$ is a simple graph on $n$ vertices that is not $K_n$, then $\chi(G) < n$.
(6) If $G$ is planar — that is, it is possible to draw it in the plane so that no two edges cross — then $\chi(G) \le 4$. This is the notorious Four-Color Theorem. On the other hand, it is much easier to prove the Five-Color Theorem, that $\chi(G) \le 5$ for all planar $G$; we will do this eventually.

Proposition 5.2. Let $\alpha, \chi, \omega$ be the coclique, chromatic, and clique numbers. Then $\chi \ge \omega$ and $\chi \ge n/\alpha$. (Here the clique number $\omega = \omega(G)$ is the number of vertices in the largest clique in $G$. Recall that $\alpha$ is the number of vertices in the largest coclique, so $\omega(G) = \alpha(\bar G)$.)

Proof. If $Q$ is a clique, then every proper coloring must assign different colors to each vertex in $Q$, hence must use at least $|Q|$ colors. If $Q$ is a maximum clique, we get $\chi \ge |Q| = \omega$. On the other hand, each color class in a proper coloring is a coclique, hence has size $\le \alpha$. Therefore, if $G$ is $k$-colorable, then $k\alpha \ge n$, which gives the second inequality. □

Equality need not hold. For example, if $G$ is an odd cycle, then $\chi(G) = 3$ but $\omega(G) = 2$. In general, computing $\chi(G)$ is hard — in fact, NP-complete. It can be reduced to the problem of computing the coclique number of the Cartesian product of $G$ with a clique (Prop. 5.1.11).

5.2. Greedy coloring. Here is a simple way to produce a proper coloring $c : V(G) \to \mathbb N_{>0}$. Choose an ordering $v_1, \ldots, v_n$ of the vertices and assign colors to the vertices in that order, coloring each vertex with the smallest available color:

$c(v_i) := \min\big(\mathbb N_{>0} \setminus \{c(v_j) : 1 \le j < i,\ v_iv_j \in E(G)\}\big)$.

That is, color the vertices one by one, assigning each vertex $v$ the cheapest color available (i.e., the smallest number that has not already been assigned to a neighbor of $v$).

This algorithm might produce an optimal coloring. On the other hand, it might not. For instance, let $G = P_4$, so $\chi(G) = 2$ (remember, all trees are bipartite). Ordering the vertices left to right produces a 2-coloring, but coloring the two endpoints first doesn't:

[Figure: two greedy colorings of $P_4$. Coloring the vertices left to right uses colors 1, 2, 1, 2; coloring the two endpoints first forces a third color — oops.]

Every graph has at least one ordering for which greedy coloring yields an optimal coloring (i.e., one that uses only $\chi$ colors and no more). Unfortunately, finding such an ordering is harder than finding the coloring itself. If $G = K_n$, then all orderings of course produce an optimal coloring; less trivially, the same property holds for the star $K_{1,n}$ and for odd cycles. I don't know if there is a characterization of all graphs that have this property. (Maybe threshold graphs?)

Proposition 5.3. $\chi \le \Delta + 1$.

Proof. Choose any ordering $v_1, \ldots, v_n$ of $V$ and use the greedy algorithm to construct a proper coloring. At each step in the algorithm, some color in $\{1, 2, \ldots, \Delta + 1\}$ will always be available, so the result is at worst a $(\Delta + 1)$-coloring. □

This bound can be strengthened. Let $d_i = d_G(v_i)$. In the greedy coloring algorithm, the $i$th vertex has no more than $\min(i - 1, d_i)$ earlier neighbors, so it is assigned a color no more than $1 + \min(i - 1, d_i)$. Therefore,

(20) $\chi \le 1 + \max_{i \in [n]} \big\{\min(i - 1, d_i)\big\}$.

This bound on $\chi$ is tightest if we order the vertices so that $d_1 \ge d_2 \ge \cdots \ge d_n$. In other words, greedy coloring is likely to work better if we assign colors to higher-degree vertices first. (Intuitively, it makes sense to color the hardest-to-color vertices while there are more colors available.)
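For concreteness, here is the greedy algorithm in code — a minimal sketch that takes the vertex ordering as an explicit argument, so the degree heuristic just discussed can be tried directly (adjacency as a dict of neighbor sets, as before):

    def greedy_coloring(adj, order):
        """Color vertices in the given order; each vertex receives the
        smallest positive color not already on one of its neighbors."""
        c = {}
        for v in order:
            used = {c[w] for w in adj[v] if w in c}
            k = 1
            while k in used:
                k += 1
            c[v] = k
        return c

    def degree_order(adj):
        """The heuristic ordering: highest-degree vertices first."""
        return sorted(adj, key=lambda v: -len(adj[v]))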
Warning: Ordering by degree is not guaranteed to produce an optimal coloring; it's just more likely to do so. In other words, it is a heuristic. (Exercise, possibly coming soon to a problem set near you: Construct a graph in which, for every ordering of the vertices $v_1, \ldots, v_n$ such that the greedy algorithm yields an optimal coloring, there are some indices $i < j$ such that $d(v_i) < d(v_j)$.)

In fact, the easy bound $\chi \le \Delta + 1$ can be improved. Brooks' Theorem says that if $G$ is a connected simple graph other than a clique or an odd cycle, then $\chi(G) \le \Delta$. We won't prove this (exercise?), but the idea is to cleverly construct an ordering of the vertices such that each vertex is preceded by at most $\Delta - 1$ of its neighbors, so that greedy coloring will use at most $\Delta$ colors.

5.3. Alternative formulations of coloring. An edge-coloring is a function $c : E(G) \to \mathbb N$ such that $c(e) \ne c(e')$ whenever $e, e'$ have a common endpoint. In other words, it is a coloring of the line graph $L(G)$ (see Definition 4.19). The minimum number of colors needed for an edge-coloring is called the chromatic index of $G$, denoted $\chi'(G)$; thus $\chi'(G) = \chi(L(G))$. The chromatic index is much easier to pin down than the chromatic number: for instance, $\chi' = \Delta$ for bipartite graphs (a theorem of König's from 1916) and $\chi' \in \{\Delta, \Delta + 1\}$ for all graphs (Vizing's Theorem). I have chosen not to cover this material in depth, but it could make a good final project.

Suppose that each vertex $v \in G$ has an associated list $S_v$ of colors that it is allowed to have. A list-coloring of $G$ is then a coloring $c$ such that $c(v) \in S_v$ for every $v \in V(G)$. Of course, this depends on the lists. A graph is $k$-list-colorable, or $k$-choosable, if for every family of lists $(S_v)_{v \in V(G)}$ with $|S_v| = k$ for all $v$, there exists a compatible list-coloring. Analogously, an edge-list-coloring of $G$ is a list-coloring of $L(G)$. Thus we can define

$\mathrm{ch}(G) = \min\{k : G \text{ is } k\text{-choosable}\}$, $\qquad \mathrm{ch}'(G) = \min\{k : L(G) \text{ is } k\text{-choosable}\}$.

It is easy to see that $\mathrm{ch}(G) \ge \chi(G)$ (just set all the lists equal). Equality need not hold: for example, $K_{3,3}$ is 2-colorable but not 2-choosable. The List-Coloring Conjecture says that $\mathrm{ch}'(G) = \chi'(G)$ for all graphs $G$. This is a major open problem, but one of the best-known theorems in this area is due to Fred Galvin, Emeritus Professor of Mathematics at the University of Kansas, who proved that the List-Coloring Conjecture is true if $G$ is bipartite [J. Combin. Theory Ser. B 63 (1995), no. 1, 153–158]. Again, this would be an excellent final project!

5.4. The chromatic polynomial (not in Diestel).

Definition 5.4. The chromatic polynomial of $G$ is

$p_G(k) =$ the number of proper $k$-colorings of $G$.

Some easy examples:
• $G = \overline{K_n}$ (no edges): $p_G(k) = k^n$.
• $G = K_n$: $p_G(k) = k(k-1)(k-2)\cdots(k-n+1)$.
• $G = G_1 + G_2$ (disjoint union): $p_G(k) = p_{G_1}(k)\,p_{G_2}(k)$.
• $G$ has a vertex $v$ of degree 1: $p_G(k) = (k-1)\,p_{G-v}(k)$.
• $G$ is a tree with $n$ vertices: $p_G(k) = (k-1)^{n-1}\,p_{K_1}(k) = k(k-1)^{n-1}$.

The chromatic polynomial is a (much) stronger invariant than the chromatic number. I.e., $p_G(k)$ determines $\chi(G)$ — specifically, $\chi(G) = \min\{k : p_G(k) > 0\}$ — but not vice versa. We have not yet proved that $p_G(k)$ is a polynomial function of $k$ for all $G$, but we will do so shortly (justifying its name). But first some examples.

Example 5.5. If we are both clever and lucky about the order in which to color the vertices of $G$, then the number of colors available for each vertex will not depend on the previous choices. For example, let $G$ be the graph below, with $V(G) = \{1, 2, 3, 4, 5, 6\}$.

[Figure: a chordal graph on the vertices 1, ..., 6.]

Construct a proper coloring $c$ by choosing colors $c(1), \ldots, c(6)$ in that order.
We have:
• $k$ choices for $c(1)$;
• $k - 1$ choices for $c(2)$ [can't be $c(1)$];
• $k - 2$ choices for $c(3)$ [can't be $c(1)$ or $c(2)$ — note that those two must be different];
• $k - 3$ choices for $c(4)$ [can't be $c(1)$, $c(2)$, or $c(3)$ — again, those must all be different];
• $k - 2$ choices for $c(5)$ [can't be $c(1)$ or $c(2)$];
• $k - 2$ choices for $c(6)$ [can't be $c(1)$ or $c(5)$].
Therefore,

$p_G(k) = k(k-1)(k-2)(k-3)(k-2)(k-2) = k(k-1)(k-2)^3(k-3)$.

What made this calculation so easy was the following property: for every $v \in V$, the induced subgraph on $N(v) \cap \{1, 2, \ldots, v-1\}$ is a clique. An ordering of vertices with this property is called a simplicial ordering. (Alternative terms abound: simplicial elimination ordering, perfect elimination ordering, etc.)

Theorem 5.6. Let $G$ be a simple graph. The following conditions are equivalent:
(1) $G$ has a simplicial elimination ordering.
(2) $G$ has no induced cycle of length $\ge 4$. Equivalently, every cycle in $G$ of length 4 or more has a chord (an edge between two vertices that are not adjacent in the cycle).
(3) Either $G$ is a clique, or $G = G_1 \cup G_2$, where $G_1$ and $G_2$ are chordal induced subgraphs and $G_1 \cap G_2$ is a clique.

Graphs that satisfy these conditions are called chordal graphs, for the reason given in condition (2), and are a very interesting and important family. The implication (1) $\implies$ (2) is not too hard, but (2) $\implies$ (1) is trickier. The equivalence of (2) and (3) is known as Dirac's theorem. The chromatic polynomials of chordal graphs always split into linear factors — use greedy coloring with the reverse of a simplicial elimination ordering, as in Example 5.5.

Example 5.7. If we don't have an SEO, then it's harder to calculate the chromatic polynomial, because the number of colors available for each vertex may depend on the previous choices. Let $G = C_4$, with vertices $v, w, x, y$ in cyclic order. We will try to determine $p_G(k)$. Start by choosing the colors of two opposite vertices $v, x$. The problem is that since $vx \notin E$, we don't know whether or not $c(v) = c(x)$, and so we have two different possibilities for the number of colors available for the other vertices.

[Figure: the two cases $c(v) = c(x)$ (left) and $c(v) \ne c(x)$ (right).]

• If $c(v) = c(x)$ (left), then there are $k$ choices for $c(v) = c(x)$, and $k - 1$ independent choices for each of $c(y)$, $c(w)$.
• If $c(v) \ne c(x)$ (right), then there are $k(k-1)$ choices for $c(v)$ and $c(x)$ together, and $k - 2$ independent choices for each of $c(y)$, $c(w)$.
Therefore, the chromatic polynomial of $C_4$ is

$k(k-1)^2 + k(k-1)(k-2)^2 = k(k-1)(k^2 - 3k + 3)$.

Alternately, we could have colored $v$, then $w$, then $x$, but then we'd still have to worry about whether or not $c(v) = c(x)$. This is a relatively simple case; for bigger graphs, these calculations can get much uglier, with cases, subcases and subsubcases galore. (Similar tedium arises if $G$ is chordal but the ordering of the vertices is not an SEO.)

Since you asked, there do exist non-chordal graphs whose chromatic polynomials split into linear factors. The smallest example is the graph obtained by taking $K_6$ and subdividing one edge, as shown below.

[Figure: $K_6$ with one edge subdivided.]
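For small graphs, Definition 5.4 can be evaluated by sheer enumeration, which makes a handy sanity check on calculations like the one above. A minimal sketch, which confirms the $C_4$ formula for small $k$:

    from itertools import product

    def count_colorings(vertices, edges, k):
        """p_G(k) by brute force: count the proper functions V -> [k]."""
        total = 0
        for assignment in product(range(k), repeat=len(vertices)):
            c = dict(zip(vertices, assignment))
            if all(c[u] != c[w] for u, w in edges):
                total += 1
        return total

    C4 = (['v', 'w', 'x', 'y'],
          [('v', 'w'), ('w', 'x'), ('x', 'y'), ('y', 'v')])
    for k in range(6):
        assert count_colorings(*C4, k) == k * (k - 1) * (k**2 - 3*k + 3)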
5.5. The chromatic recurrence. A more systematic (though still exponentially slow) way to calculate $p_G(k)$ is via the following recurrence.

Theorem 5.8 (Chromatic Recurrence). For every $e \in E(G)$,

$p_G(k) = p_{G-e}(k) - p_{G/e}(k)$.

Proof. Let $v, w$ be the endpoints of $e$. First, every proper coloring of $G$ is certainly a proper coloring of $G - e$ (as deleting an edge cannot turn a proper coloring improper). OTOH, a proper coloring $c$ of $G - e$ is a proper coloring of $G$ if and only if $c(v) \ne c(w)$. Therefore,

$p_{G-e}(k) - p_G(k) = \#\{\text{proper } k\text{-colorings } c \text{ of } G - e \text{ with } c(v) = c(w)\}$.

But if $c(v) = c(w)$, then $c$ corresponds to a proper coloring of $G/e$ — just color the merged vertex $x = vw$ with the color $c(v) = c(w)$. Conversely, a proper coloring of $G/e$ can be "expanded" to a proper coloring of $G - e$ with $c(v) = c(w)$. Therefore, the right-hand side of the previous equation is just $p_{G/e}(k)$, and we're done. □

[Figure: a graph G with edge e = vw, together with G − e and G/e.]

Corollary 5.9. $p_G(k)$ is a polynomial in $k$.

This is immediate from the chromatic recurrence, by induction on the number of edges, with base case $p_{\overline{K_n}}(k) = k^n$.

Example 5.10. Let $G = C_4$. For each edge $e$, $G - e \cong P_4$ (a tree with 3 edges) and $G/e \cong K_3$, so

$p_{C_4} = p_{P_4} - p_{K_3} = k(k-1)^3 - k(k-1)(k-2) = k(k-1)\big((k-1)^2 - (k-2)\big) = k(k-1)(k^2 - 3k + 3)$,

confirming the earlier calculation. More generally, there is a reasonably nice formula for $p_{C_n}$ in terms of $n$ (exercise!).

• One very nice feature of the chromatic recurrence is that the edge $e$ can be chosen arbitrarily. This is counterintuitive, but is actually characteristic of all deletion-contraction recurrences.
• If a contraction produces parallel edges, then we can remove all but one member of each parallel class; this doesn't affect the chromatic polynomial.
• If $G$ has lots of edges, it may be more convenient to run the recurrence backwards. That is, if $v, w$ are nonadjacent vertices of $G$, then the chromatic recurrence gives

$p_G(k) = p_{G+vw}(k) + p_{G/vw}(k)$,

where $G + vw$ is formed by adding an edge between $v$ and $w$, and $G/vw$ is obtained by fusing $v$ and $w$ into a single vertex.

Example 5.11. Let $G$ be the "near-complete graph" consisting of $K_n$ with a single edge $e = xy$ removed. Then

$p_G(k) = p_{G+e}(k) + p_{G/e}(k) = p_{K_n}(k) + p_{K_{n-1}}(k)$
$= k(k-1)(k-2)\cdots(k-n+1) + k(k-1)(k-2)\cdots(k-n+2)$
$= k(k-1)(k-2)\cdots(k-n+3)(k-n+2)^2$.

Another consequence is the following fact about the chromatic polynomial of any graph:

Theorem 5.12. Let $G$ be a simple graph with $n = n(G)$ and $r = e(G)$.
(1) The coefficients of $p_G(k)$ alternate in sign.
(2) $p_G(k) = k^n - rk^{n-1} +$ lower-order terms. I.e., $p_G$ is monic of degree $n$, and the second-leading coefficient tells you the number of edges.

So these two (very basic) invariants of a graph are determined by its chromatic polynomial.

Proof. By induction on $r$. Base case: if $r = 0$, then $p_G(k) = k^n$ and the claims hold trivially. Inductive step: suppose $r > 0$. Pick an edge $e \in E(G)$. By induction, the theorem is true for both $G - e$ and $G/e$. That is,

$p_{G-e}(k) = \sum_{i=0}^{n} (-1)^i a_i k^{n-i}$, $\qquad p_{G/e}(k) = \sum_{i=0}^{n-1} (-1)^i b_i k^{n-1-i}$,

where the $a_i$ and $b_i$ are nonnegative integers with

(21) $a_0 = b_0 = 1$, $\quad a_1 = r - 1$.

By the chromatic recurrence,

$p_G(k) = p_{G-e}(k) - p_{G/e}(k) = \Big(\sum_{i=0}^{n} (-1)^i a_i k^{n-i}\Big) - \Big(\sum_{i=0}^{n-1} (-1)^i b_i k^{n-(i+1)}\Big)$
$= \Big(\sum_{j=0}^{n} (-1)^j a_j k^{n-j}\Big) + \Big(\sum_{j=1}^{n} (-1)^{j} b_{j-1} k^{n-j}\Big)$
$= k^n + \sum_{j=1}^{n} (-1)^j (a_j + b_{j-1}) k^{n-j}$.

This polynomial is evidently monic of degree $n$ and alternates in sign. Also, the magnitude of the next-to-leading coefficient (i.e., on $k^{n-1}$) is $a_1 + b_0 = r = e(G)$ by (21). □

5.6. Colorings and acyclic orientations. An orientation of a graph $G$ is called acyclic if it contains no directed cycle. Let $a(G)$ be the number of acyclic orientations. For example:
• If $G$ is a forest with $e$ edges, then all $2^e$ orientations of $G$ are acyclic, so $a(G) = 2^e$.
• If $G = C_n$, then all but two of its orientations are acyclic, so $a(C_n) = 2^n - 2$.
• If $G = K_n$, then it can be proven that every acyclic orientation corresponds to a total ordering of the vertices in which each edge points from the smaller to the greater vertex. In particular, $a(K_n) = n!$.
Theorem 5.13 (Stanley, 1973). $a(G) = (-1)^{n(G)}\, p_G(-1)$.

Example: If $G$ is a forest with $n$ vertices and $c$ components, then $p_G(k) = k^c (k-1)^{n-c}$, so $(-1)^n p_G(-1) = (-1)^n (-1)^c (-1-1)^{n-c} = 2^{n-c} = 2^{e(G)}$.

Example: If $G = K_n$, then $(-1)^n p_{K_n}(-1) = (-1)^n \prod_{i=0}^{n-1} (-1 - i) = \prod_{i=0}^{n-1} (i + 1) = n!$.

Sketch of proof. Let $\tilde a(G) = (-1)^{n} p_G(-1)$. If $e(G) = 0$, then $p_G = k^n$, so $\tilde a(G) = 1$, which is indeed the number of acyclic orientations. Otherwise, induct on $e(G)$. The chromatic recurrence implies that $\tilde a(G) = \tilde a(G - e) + \tilde a(G/e)$ for every edge $e$. In fact, $a(G)$ satisfies the same recurrence. To prove this, classify the acyclic orientations of $G$ according to whether or not the orientation of $e$ can be reversed, and relate them to acyclic orientations of $G - e$ and of $G/e$. The details are left to the reader — compare the proof of Theorem 4.18. □

Actually, Stanley proved something more general, namely a combinatorial interpretation of the numbers $p_G(-k)$ for all $k \in \mathbb N$ (not just $k = 1$).

We have now seen four ostensibly unrelated graph invariants that satisfy deletion-contraction recurrences: the number of spanning trees, the number of strong orientations, the chromatic polynomial, and the number of acyclic orientations. What's behind all this?

The connection between colorings and acyclic orientations is actually quite deep. Given a coloring $c : V(G) \to \mathbb N$, there is a natural way to define an acyclic orientation: simply orient each edge in the direction of increasing color. (Do you see why this is acyclic?) Conversely, given an acyclic orientation, one can study the class of all colorings that give rise to it.

This connection can be explained geometrically. Given a simple graph $G$ on vertex set $[n]$, we can associate each edge $ij \in G$ with a hyperplane in $\mathbb R^n$, namely

$H_{ij} = \{x = (x_1, \ldots, x_n) \in \mathbb R^n \mid x_i = x_j\}$.

The set $\mathcal A_G = \{H_{ij} \mid ij \in E(G)\}$ is called the graphic hyperplane arrangement of $G$. It partitions $\mathbb R^n$ into a number of regions. What is a proper coloring $c : V(G) \to \mathbb N$? It is just a point in $(\mathbb R^n \setminus \mathcal A_G) \cap \mathbb N^n$. Indeed, any integer point $x \in \mathbb R^n$ can be regarded as a function $c : V(G) \to \mathbb N$ sending $i$ to $x_i$, and the condition that $x \notin \mathcal A_G$ says precisely that the function is a proper coloring. Meanwhile, the acyclic orientations correspond precisely to the regions of $\mathcal A_G$. If $e = ij$ is any edge, then which side of the hyperplane $H_{ij}$ a point $x$ is on tells you which one of $x_i$ or $x_j$ is bigger, hence gives an orientation of $e$. Moreover, if $D$ is any orientation, then the system of linear inequalities $\{x_i < x_j \mid \vec{ij} \in D\}$ has solution space equal to a region of $\mathcal A_G$ if $D$ is acyclic, and empty otherwise. This is just the tip of the iceberg of the theory of hyperplane arrangements. For much more on this connection, see, e.g., various works by Matthias Beck.
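Theorem 5.13 is pleasant to verify by machine: count the acyclic orientations directly, and evaluate $p_G(-1)$ via the chromatic recurrence (Theorem 5.8). A minimal sketch for small simple graphs, with edges as 2-tuples (representation and names are mine):

    from itertools import product

    def is_acyclic(vertices, arcs):
        """Detect acyclicity by repeatedly deleting sources (Kahn's algorithm)."""
        indeg = {v: 0 for v in vertices}
        out = {v: [] for v in vertices}
        for u, w in arcs:
            indeg[w] += 1
            out[u].append(w)
        queue = [v for v in vertices if indeg[v] == 0]
        removed = 0
        while queue:
            v = queue.pop()
            removed += 1
            for w in out[v]:
                indeg[w] -= 1
                if indeg[w] == 0:
                    queue.append(w)
        return removed == len(vertices)

    def count_acyclic_orientations(vertices, edges):
        return sum(is_acyclic(vertices,
                              [e if keep else (e[1], e[0])
                               for e, keep in zip(edges, flips)])
                   for flips in product([False, True], repeat=len(edges)))

    def p(vertices, edges, k):
        """Evaluate p_G(k) by the recurrence p_G = p_{G-e} - p_{G/e}."""
        if not edges:
            return k ** len(vertices)
        (u, w), rest = edges[0], edges[1:]
        # Contract e = uw: fuse w into u; drop loops and parallel copies.
        merged = {frozenset({u if a == w else a, u if b == w else b})
                  for a, b in rest}
        merged = [tuple(e) for e in merged if len(e) == 2]
        return (p(vertices, rest, k)
                - p([v for v in vertices if v != w], merged, k))

    V = list(range(5))
    E = [(i, (i + 1) % 5) for i in range(5)]   # C_5
    assert count_acyclic_orientations(V, E) == (-1)**len(V) * p(V, E, -1)  # 30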
5.7. Perfect graphs. We have seen that $\chi(G) \ge \omega(G)$ in general, and that equality need not hold; for example, if $G = C_n$ with $n \ge 5$ odd, then $\chi(G) = 3$ and $\omega(G) = 2$. When does equality hold? In a sense this is not the right question to ask. Consider the graph obtained by identifying copies of $C_3$ and $C_5$ along an edge. This graph has $\chi = \omega = 3$ (the 3-coloring shown on the right is optimal), but it still contains an induced 5-cycle. So while we've fixed the problem with $C_5$ that $\chi > \omega$, we've sort of cheated. This leads to the following definition.

[Figure: $C_3$ and $C_5$ identified along an edge, with an optimal 3-coloring.]

Definition 5.14. A graph $G$ is perfect if $\chi(H) = \omega(H)$ for every induced subgraph $H \subseteq G$.

This is a very well-studied class of graphs. It is not hard to prove that chordal graphs (see Theorem 5.6 above) are perfect. The first major theorem about perfect graphs, first proved by L. Lovász [J. Combin. Theory Ser. B 13 (1972), 95–98], is as follows:

Theorem 5.15 (Perfect Graph Theorem). $G$ is perfect if and only if $\bar G$ is perfect.

Lovász's proof is widely admired, and would make an excellent final project. A major recent advance was the characterization of perfect graphs in terms of excluded induced subgraphs, conjectured by Berge in 1963 and proven by M. Chudnovsky, N. Robertson, P. Seymour and R. Thomas [Ann. Math. (2) 164 (2006), no. 1, 51–229]:

Theorem 5.16 (Strong Perfect Graph Theorem). $G$ is perfect iff it contains no induced subgraph isomorphic to $C_n$ or $\overline{C_n}$ for any odd $n \ge 5$.

The Strong Perfect Graph Theorem implies the Perfect Graph Theorem, since the condition of the former is clearly preserved under complementation. (This would decidedly not make a good final project — note the length of the paper. But there is probably a more manageable summary of the proof in some other source.)

Instead of giving Lovász's original proof of the PGT, I will follow Diestel in giving a really slick linear algebra proof, due to G. Gasparian [Combinatorica 16 (1996), 209–212], of a result stronger than the Perfect Graph Theorem.

Theorem 5.17. $G$ is perfect if and only if $n(H) \le \alpha(H)\,\omega(H)$ for all induced subgraphs $H \subseteq G$.

Note that this condition is self-complementary (because complementation swaps $\alpha$ and $\omega$), so this result implies Theorem 5.15.

Proof. As usual, one direction (in this case $\Longrightarrow$) is easy and one ($\Longleftarrow$) is hard.

Suppose that $G$ is perfect. Let $H \subseteq G$ be induced and let $c$ be an optimal coloring of $H$. Then the number of color classes is $\chi(H) = \omega(H)$, and each color class is a coclique in $H$, hence has size at most $\alpha(H)$. It follows that $n(H) \le \alpha(H)\,\omega(H)$.

For the converse, let $G$ be a graph of smallest possible order such that $G$ is not perfect but

(22) $n(H) \le \alpha(H)\,\omega(H)$ for all induced $H \subseteq G$.

Let $\alpha = \alpha(G)$ and $\omega = \omega(G)$. First, we claim that for every nonempty coclique $A \subseteq V$ we have

(23) $\chi(G - A) = \omega(G - A) = \omega$.

The first equality holds because $G - A \subsetneq G$ is perfect. For the second equality, clearly $\omega(G - A) \le \omega(G)$; and if $\chi(G - A) < \omega(G)$, then $\chi(G) \le \omega(G)$ (since any $(k-1)$-coloring of $G - A$ can be extended to a $k$-coloring of $G$ by assigning color $k$ to all vertices in the coclique $A$). But that would imply that $G$ is perfect, which we have assumed it isn't.

Now, we cook up a big batch of cliques and cocliques. Define

$A_0 = \{u_1, \ldots, u_\alpha\}$ = some maximum coclique in $G$,
$A_1, \ldots, A_\omega$ = the color classes of some $\omega$-coloring $c_1$ of $G - u_1$ (note that these are all cocliques),
$A_{\omega+1}, \ldots, A_{2\omega}$ = the color classes of some $\omega$-coloring $c_2$ of $G - u_2$,
$\qquad\vdots$
$A_{(\alpha-1)\omega+1}, \ldots, A_{\alpha\omega}$ = the color classes of some $\omega$-coloring $c_\alpha$ of $G - u_\alpha$.

Every $A_i$ is a coclique, so by (23) there is a clique $K_i$ of size $\omega$ in $G - A_i$.

Claim 1: For every $\omega$-clique $K \subseteq V(G)$, we have $K \cap A_i = \emptyset$ for exactly one $i \in \{0, \ldots, \alpha\omega\}$.

To see this, first note that if $K \cap A_0 = \emptyset$, then for every $i$, $K$ is a clique in $G - u_i$, hence meets every color class of $c_i$. OTOH, if $K \cap A_0 = \{u_i\}$ for some $i$ (which must be unique), then by the same argument $K$ meets every color class of $c_j$ for $j \ne i$, and meets all but one of the color classes of $c_i$.

Now, we define some matrices. Let $\theta = \alpha\omega + 1$.
• $J$ ($\theta \times \theta$): 0 on the main diagonal; 1 elsewhere.
• $A$ ($\theta \times n$): rows are the incidence vectors of $A_0, \ldots, A_{\alpha\omega}$.
• $B$ ($n \times \theta$): columns are the incidence vectors of $K_0, \ldots, K_{\alpha\omega}$.

Claim 2: $AB = J$. To see this, note that the dot product of the $i$th row of $A$ with the $j$th column of $B$ is just the cardinality of $A_i \cap K_j$.
6. Planarity and Topological Graph Theory

Here's a classical brain teaser. Three houses are to be linked to three utilities (water, gas and electricity, let's say). No two links can cross. How can this be done?

In graph theory terms, this is the problem of drawing $K_{3,3}$ in the plane without any two edges crossing. In fact, this is impossible, as we will show. Here are some related questions we will try to get at.

• Which graphs are planar, i.e., can be drawn in the plane without any edge crossings?
• What properties do planar graphs have? (The Four-Color Theorem is one of the most famous.)
• What can we say about drawings of graphs on surfaces other than the plane (e.g., the torus)?

6.1. Plane graphs, planar graphs, and Euler's formula. Up until now, we've defined graphs purely combinatorially. That is, vertices and edges have simply been elements of abstract sets, not points and curves in space. We have insisted that the structure of a graph is independent of the way it is drawn. Now we are going to try to understand graphs by studying how they can be drawn.

Definition 6.1. A plane curve is the image of a continuous function $\varphi : [0, 1] \to \mathbb{R}^2$. (It doesn't hurt to impose stronger conditions than continuity: e.g., that $\varphi$ is differentiable, or of class $C^\infty$, or piecewise linear.) The curve is closed if $\varphi(0) = \varphi(1)$. It is simple if $\varphi$ is injective (except possibly $\varphi(0) = \varphi(1)$).

Definition 6.2. A plane graph is a pair $\Gamma = (V, E)$, where $V$ consists of a finite set of points in $\mathbb{R}^2$, and $E$ consists of a finite set of simple curves such that
(1) each curve has two endpoints in $V$ (which can be the same);
(2) no curve meets any point in $V$ other than its endpoints;
(3) two curves can only meet at points in $V$.

(Diestel requires implicitly that the two endpoints of each curve be different, and explicitly that different curves have different pairs of endpoints — in other words, he wants his graphs to be simple. I don't think we need those restrictions.)

I will consistently use $\Gamma$ for geometric plane graphs and $G$ for combinatorial graphs. A plane graph $\Gamma$ defines a graph $G$ in an obvious way, so we can treat $\Gamma$ as a graph and speak of its cycles, bridges, loops, colorings, etc. We say that $\Gamma$ is a (plane) drawing of $G$. A combinatorial invariant of a plane graph $\Gamma$ is one that depends only on $G$; that is, it is the same for all other drawings of $G$. For instance, the numbers of vertices, edges and components are clearly combinatorial invariants.

Definition 6.3. A graph $G$ is planar if it has a plane drawing.

Proposition 6.4. Every subgraph of a planar graph is planar. In addition, planarity is unchanged by adding or removing loops, or edges parallel to another edge.

Proof. For the first assertion, if $H \subseteq G$ then every drawing of $G$ gives rise to a drawing of $H$ by simply erasing the vertices and edges not in $H$. For the second assertion, we can add a loop to a drawing by inserting a very small closed curve at a vertex, and we can add a parallel edge by cloning the corresponding curve and wiggling it slightly. (Technically these arguments require some $\epsilon$–$\delta$-type analysis, but making them precise in this way is not worth the trouble.) □
One fact from topology we will need is the Jordan Curve Theorem (more properly the Jordan–Schönflies Theorem), which asserts that every simple closed plane curve partitions $\mathbb{R}^2$ into two pieces. (This seemingly obvious fact is surprisingly hard to prove in general, although it is not so bad for, say, piecewise-$C^\infty$ curves.) More precisely, if $C$ is any simple closed curve, then $\mathbb{R}^2 \setminus C$ has two path-connected components. (Two what? For any space $X$, you can define a relation on its points as follows: $x \sim y$ if $x, y$ are the endpoints of some curve $\varphi : [0, 1] \to X$. This is an equivalence relation (check this!), and its equivalence classes are called the path-connected components of $X$.)

Definition 6.5. Let $\Gamma = (V, E)$ be a plane graph. The path-connected components of $\mathbb{R}^2 \setminus \Gamma$ are called the faces of $\Gamma$. (Technically this is an abuse of notation — we should write $\mathbb{R}^2 \setminus \bigcup_{e \in E} e$ rather than $\mathbb{R}^2 \setminus \Gamma$ — but the latter is much simpler and its meaning is clear.) We write $F$ or $F(\Gamma)$ for the set of faces, and $f$ or $f(\Gamma)$ for the number of faces.

For instance, the following plane drawing has 12 faces: faces $1, \dots, 11$ are bounded, while face 12 is the unbounded face.

[Figure: a plane drawing with faces labeled 1–12.]

As we will see soon, the number of faces does not depend on the particular drawing; it is actually a combinatorial invariant. Moreover, there is nothing special about the unbounded face as opposed to the bounded ones. Here is why. When you add a point at infinity to $\mathbb{R}^2$, you get a topological sphere. (This may be easier to visualize in reverse. Start with a sphere and poke a hole in it by deleting one point. Then you can flatten out the punctured sphere into a plane, without any more cutting, tearing or pasting.) So we can regard any plane graph as living on a sphere, where all the faces are bounded. If we now pick any face $f$, poke a hole in its interior, and flatten things out, we obtain a drawing of the same graph in which $f$ is now the unbounded face.

Each face $f$ has a boundary $\partial f$ (Diestel uses the term "frontier"), which can be represented as a closed walk in $G$. The length of $f$ is $\ell(f) = |\partial f|$. The following diagram shows $\partial f_5$ (red) and $\partial f_8$ (blue); notice that $\ell(f_5) = 5$ and $\ell(f_8) = 3$.

[Figure: the same drawing with $\partial f_5$ and $\partial f_8$ highlighted.]

Each edge lies on the boundaries of exactly two faces (a consequence of the Jordan Curve Theorem), so we immediately obtain a useful analogue of the Handshaking Theorem.

Proposition 6.6. For any plane graph $\Gamma$, we have
$$\sum_{f \in F(\Gamma)} \ell(f) = 2\,e(\Gamma).$$

Remark 6.7. Bridges have to be treated specially, since the same face lies on both sides of a bridge (in fact, this property characterizes bridges). Accordingly, if $f$ is a face lying on both sides of a bridge $e$, we regard $\partial f$ as containing two copies of $e$. For example, the bounded face $f$ shown below has length $\ell(f) = 5$.

[Figure: a plane graph with a bridge $e$; the bounded face $f$ lies on both sides of $e$.]

Note that this convention is consistent with Prop. 6.6. (Recall that for the original version of handshaking, we decided that a loop incident to a vertex should contribute 2 to its degree. This is the dual statement.)

Theorem 6.8. Let $\Gamma$ be a plane graph. Then
$$f(\Gamma) = e(\Gamma) - n(\Gamma) + c(\Gamma) + 1.$$
In particular, the number of faces is a combinatorial invariant.

Proof. First, if $\Gamma$ is acyclic, then $f = 1$ and the desired equation reduces to $e - n + c = 0$, which is equivalent to Proposition 1.24.
Otherwise, let $H$ be a maximal acyclic subgraph of $\Gamma$ (so that $|E(H)| = n - c$), and let $e \notin H$. Then $e$ is not a bridge; moreover, recall that there is a unique cycle $C_e$ in $H + e$ (formed by $e$ together with the unique path in $H$ between the endpoints of $e$; if $e$ is a loop then that cycle is just $C_e = \{e\}$). If we delete $e$, then the two faces bordering $e$ merge into a single face, so the total number of faces drops by one. If we keep deleting edges, we will eventually get an acyclic graph, so $f(\Gamma) - |E(\Gamma) \setminus E(H)| = 1$. Since $|E(H)| = n - c$, this says $f - (e - (n - c)) = 1$, or $f = e - n + c + 1$ as desired. (Essentially, this argument is induction on $e - n + c$.)

Alternately, we could contract the edges of $H$ one by one. Each contraction leaves the number of regions unchanged. We eventually get a graph with the $|E(\Gamma) \setminus E(H)|$ remaining edges, all of which are loops, hence with $e - |E(H)| + 1$ regions. Since $|E(H)| = n - c$, we get the same result. □

Corollary 6.9 (Euler's formula). For every connected plane graph (hence every connected planar graph), we have $f = e - n + 2$. More generally, $f \ge e - n + 2$ for all plane graphs, with equality if and only if $G$ is connected.

Remark 6.10. The two plane graphs shown below are combinatorially isomorphic, and in both cases the sum of the face lengths is 16 (twice the number of edges), but the faces themselves have different lengths. So while $f$ is a combinatorial invariant, the multiset of face lengths is not.

[Figure: two plane drawings of the same 8-edge graph whose face-length multisets differ.]

6.2. Applications of Euler's formula. Combining Euler's formula with the handshaking and length-sum formulas gives very useful upper bounds on the number of edges of a planar graph, and lets us immediately show that certain graphs cannot be planar (in particular solving the houses-and-utilities brain teaser).

Recall that the girth of a graph $G$ is the length of its smallest cycle (or $\infty$ if $G$ is acyclic). For example, a graph is simple if and only if its girth is at least 3.

Theorem 6.11. Let $G$ be a simple, non-acyclic planar graph with girth $g \ge 3$. Then

(24) $e \le \dfrac{g}{g - 2}\,(n - 2)$.

Proof. Let $\Gamma$ be a drawing of $G$. Each face of $\Gamma$ has length $\ge g$, so the length-sum formula (Prop. 6.6) gives
$$2e = \sum_{s \in F(\Gamma)} \ell(s) \ge g f \ge g(e - n + 2),$$
where the second inequality comes from Corollary 6.9. Now solving for $e$ gives the desired inequality. □

Corollary 6.12.
(1) If $G$ is simple and planar, then $e \le 3n - 6$.
(2) If $G$ is simple, bipartite, and planar, then $e \le 2n - 4$.
(3) $K_5$ and $K_{3,3}$ are nonplanar.

Proof. (1) Substitute $g = 3$ into (24). (2) Substitute $g = 4$ into (24). (3) $K_5$ has $(n, e) = (5, 10)$, hence cannot be planar by (1). $K_{3,3}$ is bipartite and has $(n, e) = (6, 9)$, hence cannot be planar by (2). □

By Proposition 6.4, no graph that contains $K_5$ or $K_{3,3}$ as a subgraph can be planar. The converse of this statement is false: the Petersen graph has no $K_5$- or $K_{3,3}$-subgraph, but it has $(n, e, g) = (10, 15, 5)$, which violates (24), so it is not planar. (More generally, there exist nonplanar graphs of arbitrarily large girth.)
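The bound (24) and Corollary 6.12 are easy to apply mechanically. A tiny sketch (mine, purely illustrative):

```python
def girth_bound_ok(n, e, g):
    """Necessary condition (24) for a simple, non-acyclic planar graph:
    e <= g/(g-2) * (n-2).  Returns False when the graph cannot be planar."""
    return e <= g / (g - 2) * (n - 2)

for name, n, e, g in [("K5", 5, 10, 3), ("K3,3", 6, 9, 4), ("Petersen", 10, 15, 5)]:
    print(name, "can be planar?", girth_bound_ok(n, e, g))
# all three print False, so none of these graphs is planar
```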
On the other hand, the bounds of Corollary 6.12 can be used to characterize graphs that are maximally planar, i.e., planar graphs such that adding any single edge produces a nonplanar graph. For example, $K_5 - e$ is clearly of this form.

Proposition 6.13. A graph $G$ on $n \ge 3$ vertices is maximally planar iff it is planar and $e = 3n - 6$.

Proof. Clearly, maximally planar implies connected. In addition, $G$ is maximally planar if and only if every face has length 3 (since adding an edge to a plane drawing must join two vertices on a common face of length $\ge 4$). Now
$$e = 3n - 6 = 3(n - 2) \iff e = 3(e - f) \quad \text{(by Euler's formula, since } n - 2 = e - f) \iff 2e = 3f \iff \ell(s) = 3 \ \forall s \in F$$
(by the length-sum formula). □

Shortly, we will show that $K_5$ and $K_{3,3}$ are, in a precise sense, the only minimal obstructions to planarity. The first step is to define precisely what "minimal obstruction" means.

6.3. Minors and topological minors. We have already worked with contraction of edges. For the time being, we are going to redefine $G/e$ by removing any loops or parallel edges created by the contraction (i.e., removing all but one element of each parallel class of edges), so that the operation keeps us in the world of simple graphs.

Definition 6.14. Let $V_1, \dots, V_k$ be a partition of $V(G)$ into nonempty, pairwise-disjoint subsets, each of which induces a connected subgraph. The corresponding contraction is the graph $X = G/\mathcal{V}$ with vertices $V(X) = [k]$, and an edge from $i$ to $j$ whenever $[V_i, V_j] \subseteq E(G)$ is nonempty. (This is the same as successively contracting the edges of a maximal forest in $G[V_1] \cup \cdots \cup G[V_k]$.) This version of contraction always produces a simple graph $X$. Note that contracting a single edge $e = xy$ corresponds to contracting with respect to the partition in which $\{x, y\}$ is the only non-singleton block.

For example, consider this picture, taken from Diestel, 2nd ed., p. 19. Here $G$ is the graph in the middle, and the gray blobs are the blocks $V_i$, which are called the branch sets.

[Figure: a graph with its branch sets circled, and the resulting contraction.]

If we don't want to specify the partition $\mathcal{V}$, we can just say that "$G$ is an $MX$" or "$G = MX$". Strictly speaking, it would be most correct to regard $MX$ as the set of all graphs from which $X$ can be obtained by contraction, and write $G \in MX$, but the notation $G = MX$ seems common. (Mnemonically, you might think of the symbol $M$ as standing for "magnification," as suggested by Andrei Elliott.)

Here's another example, which shows that the Petersen graph is an $MK_5$.

[Figure: the Petersen graph with five branch sets, contracting to $K_5$.]
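Definition 6.14 is easy to implement. The following sketch (my own illustration; contracting the five spokes is one standard choice of branch sets) reproduces the Petersen-to-$K_5$ example:

```python
def contract(n, edges, blocks):
    """Contraction G/V of Definition 6.14 (a sketch; all names are mine).
    `blocks` is a list of vertex sets partitioning range(n), each of which
    should induce a connected subgraph.  Returns the simple graph on block
    indices, with an edge {i, j} whenever some G-edge joins block i to j."""
    where = {}
    for i, block in enumerate(blocks):
        for v in block:
            where[v] = i
    return {frozenset((where[u], where[v]))
            for u, v in edges if where[u] != where[v]}

# Petersen graph: outer 5-cycle 0..4, inner pentagram 5..9, spokes i -- i+5
petersen = ([(i, (i + 1) % 5) for i in range(5)]
            + [(5 + i, 5 + (i + 2) % 5) for i in range(5)]
            + [(i, i + 5) for i in range(5)])
blocks = [{i, i + 5} for i in range(5)]      # the five spokes as branch sets
print(sorted(tuple(sorted(e)) for e in contract(10, petersen, blocks)))
# -> all 10 pairs (i, j) with 0 <= i < j <= 4, i.e. K5
```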
Definition 6.15. A graph $X$ is a minor of a graph $Y$ if $Y$ has a subgraph that is an $MX$. Equivalently, $X$ can be obtained from $Y$ by some sequence of vertex deletions, edge deletions, and edge contractions. (Note that we can perform these operations in any order — we don't have to save all the contractions for the end.)

Proposition 6.16. If $G$ is planar, then every minor of $G$ is planar. Equivalently, if $X$ is not planar, then no $MX$ is planar.

Deletion of edges and vertices clearly preserves planarity. The harder case is contraction, which boils down to a topological statement: if $B$ is a closed, connected, simply connected, bounded region in $\mathbb{R}^2$ (such as a simple curve), then the space obtained by formally squashing $B$ down to a single point is also topologically equivalent to $\mathbb{R}^2$ (the technical word for this equivalence is homeomorphism). Again, I omit the details, but it might be fun for the analysis-minded to work out an explicit homeomorphism, at least for the case that $B$ is a line segment.

We next describe a more restricted notion of the minor relation. Subdividing an edge $e$ with endpoints $x, y$ means inventing a new vertex $z$ and replacing $e$ with two new edges $xz, yz$. A subdivision of a graph $X$ is any graph $G$ that can be obtained by subdividing edges in sequence. In this case, every vertex $v$ of $X$ is also a vertex of $G$, with $d_G(v) = d_X(v)$. These are called the branch vertices of the subdivision. The other vertices all have degree 2, and are called subdividing vertices (the hollow vertices in the figure below). If $G$ is a subdivision of $X$, then we say that "$G$ is a $TX$" or "$G = TX$."

[Figure: three subdivisions of $K_4$; the subdividing vertices are hollow.]

So all three graphs above are $TK_4$'s. Mnemonically, the symbol $T$ stands for "topological subdivision." (In fact, subdivision is a homeomorphism.) Subdividing an edge or a set of edges does not affect whether or not a graph is planar (this should be easy to see). In particular, every $TK_5$ and every $TK_{3,3}$ is nonplanar, and any graph that contains such a subdivision is nonplanar.

Definition 6.17. A graph $X$ is a topological minor of a graph $Y$ if $Y$ has a subgraph that is a $TX$.

As a simple example, if $n \le m$, then $C_n$ is a minor of $C_m$ (because contracting any $m - n$ edges of $C_m$ produces $C_n$) as well as a topological minor (because subdividing any sequence of $m - n$ edges of $C_n$ produces $C_m$). The graph $K_5$ is a minor of the Petersen graph, but not a topological minor: subdivision preserves the degrees of the branch vertices, and the Petersen graph is 3-regular while $K_5$ is 4-regular, so there is no way to subdivide edges of $K_5$ to produce a subgraph of the Petersen graph.

Every subdivision can be undone by a contraction, so every $TX$ is also an $MX$. The converse is not true in general — for example, contracting an edge of $K_n$ produces $K_{n-1}$, but subdividing an edge of $K_{n-1}$ does not produce $K_n$. On the other hand, if $x \in V(G)$ has degree 2, then contracting one of the edges incident to $x$ is a reversible process — subdivide the other edge incident to $x$.

[Figure: contracting an edge $e$ at a degree-2 vertex $x$, undone by subdividing the other edge $f$.]

Proposition 6.18. Let $G, H$ be graphs.
(1) If $H$ is a topological minor of $G$, then $H$ is a minor of $G$.
(2) If $H$ is a minor of $G$ and $\Delta(H) \le 3$, then $H$ is a topological minor of $G$.

Proof. (1) follows from the statement that every $TX$ is an $MX$. For (2), suppose that $H$ is a minor of $G$, so that it is obtained from a subgraph of $G$ by a sequence of edge contractions $A \to A/e$, where $e = xy$. Contracting an edge incident to a vertex of degree 1 is the same thing as deleting that vertex, so we can assume that every time we contract an edge $e = xy$, its endpoints each have degree at least 2. Also, we can assume that $x, y$ have no common neighbor $z$, because in that case $A/xy = (A - xz)/xy$ (since contracting $xy$ causes $xz$ and $yz$ to become parallel). Therefore, the contracted vertex has degree $d_{A/e}(v_{xy}) = d_A(x) + d_A(y) - 2$ (since its neighbors are $(N(x) \setminus \{y\}) \cup (N(y) \setminus \{x\})$), and since $\Delta(H) \le 3$, at least one of $d(x), d(y)$ must be exactly 2. By the preceding discussion, all these contractions are reversible by subdivision, so $H$ is a topological minor of $G$. □

Remark 6.19. Here is another fact about topological minors and trees. Consider taking a tree $Y$ and repeatedly contracting edges incident to vertices of degree 2. These contractions are all reversible, so $Y = TX$ for every tree $X$ produced in this way. Each contraction deletes a single 2 from the degree sequence. So ultimately, $Y$ is a $TX$ for some tree $X$ with the same number of leaves as $Y$, and no vertices of degree 2. For each fixed number $\ell$ of leaves, there are only finitely many possibilities — for instance, you can convince yourself using handshaking that the number of non-leaf vertices of $X$ is at most $\ell - 2$.
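The reduction in Remark 6.19 — repeatedly contracting an edge at a degree-2 vertex — is easy to code. A sketch (mine, names invented):

```python
def suppress_degree_two(adj):
    """Repeatedly contract an edge at a degree-2 vertex of a tree
    (Remark 6.19).  `adj` maps each vertex to the set of its neighbors;
    it is modified in place and the reduced tree is returned."""
    changed = True
    while changed:
        changed = False
        for v in list(adj):
            if len(adj[v]) == 2:
                a, b = adj[v]
                if a != b:                 # always true in a tree, but safe
                    adj[a].discard(v); adj[b].discard(v)
                    adj[a].add(b); adj[b].add(a)
                    del adj[v]
                    changed = True
                    break
    return adj

# a "spider": center 0 with four legs of length 3 (vertices 1..12)
adj = {0: set()}
nxt = 1
for _ in range(4):
    prev = 0
    for _ in range(3):
        adj[prev].add(nxt)
        adj[nxt] = {prev}
        prev, nxt = nxt, nxt + 1
print(suppress_degree_two(adj))
# only the center and the four leaves survive: the spider is a TK_{1,4}
```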
6.4. Kuratowski's Theorem. The main theorem about planar graphs is the following:

Theorem 6.20 (Kuratowski's Theorem, 1930). A graph is planar if and only if it does not contain $K_5$ or $K_{3,3}$ as a topological minor.

A closely related result is the following:

Theorem 6.21 (Wagner's Theorem, 1937). A graph is planar if and only if it does not contain $K_5$ or $K_{3,3}$ as a minor.

The $\Rightarrow$ directions of both theorems follow easily from what has come before. The $\Leftarrow$ direction is the hard part. Since every topological minor is a minor, Kuratowski's Theorem is on its face a stronger result than Wagner's Theorem (convince yourself that the implication goes the right way). The first step is to show that the two theorems are actually equivalent.

Lemma 6.22 (Diestel, Lemma 4.4.2). A graph $G$ contains $K_5$ or $K_{3,3}$ as a minor if and only if it contains one of them as a topological minor.

Proof. The $\Leftarrow$ direction follows from Prop. 6.18 (1). For the $\Rightarrow$ direction, if $G$ has a $K_{3,3}$ minor, then $K_{3,3}$ is a topological minor by Proposition 6.18 (2), so we are reduced to proving the following statement: if $G$ has a $K_5$ minor, then it has either a $K_5$ topological minor or a $K_{3,3}$ minor.

Let $K$ be a minimal subgraph of $G$ that is an $MK_5$, and let $V_1, \dots, V_5$ be the branch sets of $K$. Minimality has a number of consequences:
(1) Each subgraph $G_i := K[V_i]$ must be a tree — it is connected by the definition of contraction, and it has to be acyclic, since otherwise we could remove a non-bridge and obtain a smaller $MK_5$.
(2) The only possible leaves of $G_i$ are the vertices with neighbors in other branch sets (since, again, any other leaf could be removed).
(3) $K$ has exactly one edge $e_{ij}$ between each two branch sets $V_i, V_j$.

Thus, for each $i$, the tree $T_i := G_i + \{e_{ij} : j \ne i\}$ has exactly four leaves, one in each $V_j$ for $j \ne i$. By Remark 6.19, $T_i$ is a $TY$ for some tree $Y$ with four leaves and no degree-2 vertices. There are only two possibilities for $Y$, namely $K_{1,4}$ and the tree $H$ shown below.

[Figure: the two trees with four leaves and no degree-2 vertices: $K_{1,4}$ and $H$.]

If $T_i$ is a $TK_{1,4}$ for every $i$, then $K$ is a $TK_5$ — contract each $T_i$ down to a $K_{1,4}$ and we get a $TK_5$: specifically, we get a $K_5$ in which each edge has been subdivided at most twice.

[Figure: in this picture, the degree-4 vertices of the $K_{1,4}$'s are colored blue, and each $T_i$ consists of a blue vertex and its four neighbors. In particular, two $T_i$'s can overlap.]

Otherwise, if $T_i = TH$ for some $i$, then contracting $V_i$ onto the two non-leaves of $H$, and contracting every other branch set to a single point, gives a $K_{3,3}$, as shown below. So in this case $K$ contains an $MK_{3,3}$.

[Figure: the coloring indicates the partite sets of $K_{3,3}$, and the edges of $H$ are highlighted; the dashed edges are part of $K$ but not part of the $MK_{3,3}$ it contains.] □

Of course, we have not proved Kuratowski's Theorem at this point — we have just shown that it is equivalent to Wagner's Theorem. For short, let us call either a $TK_5$ or a $TK_{3,3}$ a $TKK$. The strategy at this point is as follows:
(1) Show that Wagner's Theorem holds for 3-connected graphs, i.e., that every 3-connected graph with no $K_5$ or $K_{3,3}$ minor is planar. (Actually we'll show something stronger, due to Tutte: every such graph has a drawing in which every face is bounded by a convex polygon.) This is Lemma 6.23 and Prop. 6.24.
(2) Show that every minimal counterexample to Wagner's Theorem must in fact be 3-connected, and therefore by part (1) cannot exist.
Lemma 6.23 (Diestel, Lemma 3.2.1). Suppose that $G$ is 3-connected and $|G| > 4$. Then $G$ has some edge $e$ such that $G/e$ is 3-connected.

Proof. Suppose not. Then for every $xy \in E(G)$, the graph $G/xy$ has a separator $S$ of size 2. This separator must consist of the contracted vertex $v_{xy}$ and one other vertex, say $z$. (Otherwise, it would be a separator in $G$, but $\kappa(G) \ge 3$.) It follows that $\{x, y, z\}$ is a separator in $G$. This separator is clearly minimal, so each of $x, y, z$ has a neighbor in every component $C$ of $G - \{x, y, z\}$ (because deleting the other two vertices of the separator leaves $G$ connected). Choose $x, y, z, C$ so that $C$ is as small as possible, and let $v \in N_C(z)$. Note for later use that this implies that

(25) $N_G(v) \subseteq C \cup \{x, y, z\}$.

By the same logic as before, $G/zv$ is not 3-connected, so there is a vertex $w$ such that $\{z, v, w\}$ is a separator, and again each of $z, v, w$ has a neighbor in every component of $G - \{z, v, w\}$. Let $D$ be a component of $G - \{z, v, w\}$ that contains neither $x$ nor $y$. (Such a component must exist because $x$ and $y$ are adjacent, so they lie in a single component.) Then $N_G(v) \cap D \subseteq C$ by (25). Also, $D$ is a component of $G - \{x, y, z, v, w\}$, hence contained in some component of $G - \{x, y, z\}$, which must be $C$ (since $v$ has a neighbor in $D$). On the other hand, $v \in C \setminus D$, so $D \subsetneq C$, and this contradicts the choice of $C$. □

Proposition 6.24. Every 3-connected graph $G$ with no $K_5$ or $K_{3,3}$ minor is planar.

Proof. We will actually show something stronger (due to Tutte): every such graph has a good embedding — a drawing in which every face is a convex polygon and no three vertices are collinear.

We induct on $n = n(G)$, which must be at least 4. If $n = 4$, then $G = K_4$, which has a good embedding. Now suppose that $n > 4$. By Lemma 6.23, $G$ has an edge, say $e = xy$, such that $H = G/e$ is 3-connected. Every minor of $H$ is a minor of $G$, so $H$ has no $K_5$ or $K_{3,3}$ minor. By induction, $H$ has a good embedding. Let $z$ be the fused vertex, and let $X = N_G(x) \setminus \{y\}$ and $Y = N_G(y) \setminus \{x\}$, so that $N_H(z) = X \cup Y$.

The plane graph $H - z$ has a face containing the point $z$ (namely, the face formed by merging all the faces of $H$ whose boundary contains $z$). Let $C$ be the boundary of this face; then $N_H(z) \subseteq C$.

[Figure: the embedding of $H - z$, with the cycle $C$ surrounding the point $z$ and the neighbors $N(z)$ on $C$.]

(By the way, if I have accidentally made three vertices collinear in any of these figures, I don't want to hear about it.)

Let $u_1, \dots, u_\ell$ be the vertices of $C$, listed in cyclic order. If that list looks like
$$\underbrace{u = u_1}_{\in X},\ \underbrace{u_2, \dots, u_{k-1}}_{\in X \setminus Y},\ \underbrace{v = u_k}_{\in X},\ \underbrace{u_{k+1}, \dots, u_\ell}_{\in Y \setminus X}$$
with $1 < k < \ell$, then we can construct a good embedding of $G$ by putting $x$ where $z$ was, and pushing $y$ a little bit into the wedge formed by $u_1$ and $u_k$ (bounded by the dotted lines in the example).

[Figure: $x$ placed at $z$'s old position, $y$ pushed into the wedge at $u$ and $v$.]

What if $C$ does not have this form? There are two possibilities.

Case 1: There are three vertices $u, v, w \in N(x) \cap N(y)$. That is, $G$ contains the edges $xu, xv, xw$; $yu, yv, yw$; and $xy$. Meanwhile, $C$ can be partitioned into three paths between each two of $u, v, w$. All this together forms a $TK_5$ with branch vertices $u, v, w, x, y$ — a contradiction.

[Figure: Case 1 produces a $TK_5$.]

Case 2: There are vertices $a, b, c, d$ of $C$, listed in cyclic order, with $a, c \in N(x)$ and $b, d \in N(y)$. Then $G$ contains the edges $ax, cx, yb, yd$, and $yx$, and $C$ can be partitioned into $ab$-, $bc$-, $cd$-, and $da$-paths. All this together forms a $TK_{3,3}$.

[Figure: Case 2 produces a $TK_{3,3}$.]

Again this is a contradiction. So Cases 1 and 2 are both impossible, completing the proof. □

It remains to reduce the general case to the 3-connected case. For this I will follow West's presentation (pp. 247–248).

Definition 6.25. Let $S \subseteq V(G)$. An $S$-lobe is a subgraph of the form $G[V(H) \cup S]$, where $H$ is a component of $G - S$.

Lemma 6.26. Let $G$ be a minimal nonplanar graph. Then $G$ is 2-connected.

Proof. Certainly $G$ is connected (otherwise it would have some nonplanar component, which would contradict minimality). Suppose that it has a cut-vertex $x$, with lobes $G_1, \dots, G_k$ (remember that a lobe is an induced subgraph of the form $G[V(H) \cup \{x\}]$, where $H$ is a component of $G - x$). Every lobe is planar, by minimality; so we can draw each $G_i$ with $x$ on the outer face (say at the origin), squeeze it so that it fits in a "pie slice" of angle $< 2\pi/k$, and attach all the lobes together to get a plane drawing of $G$ — a contradiction. □

For example:

[Figure: four lobes $G_1, \dots, G_4$, each drawn in a pie slice with $x$ at the center.]
Lemma 6.27. Suppose that $G$ is nonplanar and that $S = \{x, y\}$ is a vertex cut of $G$. Then adding the edge $xy$ to some $S$-lobe produces a nonplanar graph.

Proof. Let $G_1, \dots, G_k$ be the $S$-lobes of $G$, and let $H_i = G_i + xy$. Suppose that every $H_i$ is planar; then we can show by induction on $k$ that $G + xy$ (hence also $G$) is planar. If $k = 1$ this is trivial (because $H_1 \supseteq G$). Otherwise, given a plane drawing of $H' = H_1 \cup \cdots \cup H_{i-1}$, we can take a drawing of $H_i$ with $xy$ on the outer face and shrink it to fit inside one of the faces of $H'$ bounded by the edge $xy$. For example:

[Figure: the lobes $G_1, G_2, G_3$, drawn nested along the edge $xy$.]

Therefore, if $G$ is nonplanar, one of the $H_i$ must be nonplanar. □

Proposition 6.28. Suppose that there is a nonplanar graph with no $TKK$ as a subgraph, and let $G$ be such a graph with the fewest edges among all such graphs. Then $G$ is 3-connected.

Proof. Deleting an edge certainly cannot create a $TKK$, so the minimality assumption implies that $G - e$ is planar for every $e \in E(G)$. Therefore, by Lemma 6.26, $G$ is 2-connected. Now suppose that $G$ is not 3-connected, and let $S = \{x, y\}$ be a separator. Then $e(H + xy) < e(G)$ for every $S$-lobe $H$. (This is clear if $xy \in E(G)$; otherwise, it follows because every other $S$-lobe has at least two edges.) By Lemma 6.27, there is some $S$-lobe $H$ such that $H + xy$ is nonplanar, and the minimality assumption then implies that $H + xy$ has a $TKK$, say $K$. Let $P$ be a path in $G$ from $x$ to $y$ that avoids every other vertex of $H$ (for instance, we can choose $P$ to pass through a different $S$-lobe $H'$). Then $K$, with the edge $xy$ replaced by the path $P$ if necessary, is a $TKK$ in $G$, which is a contradiction. □

[Figure: the lobe $H$ containing $K$, and the path $P$ from $x$ to $y$ through another lobe $H'$.]

Fact: Whether or not a graph is planar can be determined efficiently — in fact, in $O(n)$ time. Algorithms for planarity testing would be a good subject for an end-of-semester project.
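The notes don't specify an algorithm, but planarity testing is available off the shelf; for instance, the networkx library ships a planarity test. A quick sketch (mine, assuming networkx is installed):

```python
import networkx as nx

graphs = {
    "K5": nx.complete_graph(5),
    "K3,3": nx.complete_bipartite_graph(3, 3),
    "Petersen": nx.petersen_graph(),
}
graphs["K5 - e"] = nx.complete_graph(5)
graphs["K5 - e"].remove_edge(0, 1)          # maximally planar, per Prop. 6.13

for name, G in graphs.items():
    planar, _ = nx.check_planarity(G)       # returns (bool, certificate)
    print(f"{name}: planar = {planar}")
# K5, K3,3 and Petersen are nonplanar; K5 minus an edge is planar
```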
6.5. The Five-Color Theorem.

Theorem 6.29 (The Five-Color Theorem). If $G$ is planar, then $\chi(G) \le 5$.

It is easy to prove that every planar graph is 6-colorable. The bound $e \le 3n - 6$ for planar graphs says in particular that $\delta(G) \le 5$ (since the sum of all degrees is at most $6n - 12 < 6n$). Choose an ordering in which the last vertex $v_n$ has degree $\le 5$, delete it, and now do the same thing for $G - v_n$ (which is also planar), and so on recursively. Greedy coloring using this order will not use more than 6 colors. (Equivalently, apply (20).)

Meanwhile, it is true, but very difficult to prove, that every planar graph is 4-colorable — this is arguably the most famous theorem in graph theory, and it was open for over a century, with a long, colorful (sorry, couldn't help it) story of failed attempts to prove it. The Five-Color Theorem, on the other hand, is not trivial, but it is doable by us mortals. First, we prove a key lemma.

Lemma 6.30. Let $G$ be a plane graph and $x \in V(G)$ with $\deg(x) \ge 4$. Let $a, b, c, d$ be neighbors of $x$, listed in cyclic order around $x$ (there can be other neighbors between them). Then every $a,c$-path in $G - x$ crosses every $b,d$-path.

Proof. Let $C$ be the cycle bounding the union of the faces containing $x$. Thus $C$ is the disjoint union of an $a,b$-path, a $b,c$-path, a $c,d$-path, and a $d,a$-path. If $P, P'$ are $a,c$- and $b,d$-paths that do not cross, then neither of these paths crosses $C$ (since they can be taken to stay outside its interior), so $P \cup P' \cup C$ forms a $TK_4$. Inverting the picture so that $x$ is on the unbounded face, we obtain an outerplanar drawing of a $TK_4$, which cannot exist by a homework problem. (Note: This statement can also be proved from scratch, as Diestel does, by showing that $P \cup \{x\}$ is a cycle that separates $b$ from $d$.) □

Proof of the Five-Color Theorem. Every plane graph $G$ has $e(G) \le 3n(G) - 6$, so the sum of the vertex degrees is at most $6n - 12$, and it follows that $G$ has at least one vertex of degree $\le 5$. Let $G$ be a minimal plane graph that is not 5-colorable. Then $\delta(G) \ge 5$, for if $\deg(x) \le 4$ then $G - x$ is 5-colorable by minimality, and we can construct a 5-coloring of $G$ by assigning $x$ some color not assigned to any of its neighbors. Combined with the previous observation, we see that $G$ has some vertex of degree exactly 5. Call this vertex $x$ and let its neighbors be $a, b, c, d, e$ in cyclic order in the plane drawing, colored red, orange, yellow, green, blue respectively. (If the neighbors of $x$ did not use five distinct colors, we could color $x$ immediately, so we may assume they do.)

I claim that there do not exist both (i) an $a,c$-path consisting entirely of red and yellow vertices, and (ii) a $b,d$-path consisting entirely of orange and green vertices. If both existed, then by Lemma 6.30 they would have to cross; but since $G$ is plane, the crossing point would have to be a vertex, and the two paths have no vertices in common (their color sets are disjoint).

WLOG, suppose that a path of form (i) does not exist. Another way of saying this is that $a$ and $c$ are in different components of the subgraph of $G - x$ induced by the red and yellow vertices. If we toggle red and yellow in the component containing $a$, then we will still have a proper 5-coloring, but $a$ will now be yellow and $c$ will remain yellow. So we can then color $x$ red and obtain a proper 5-coloring of $G$. □

6.6. Planar duality.

Definition 6.31. Let $\Gamma$ be a plane graph. The plane dual of $\Gamma$ is the plane graph $\Gamma^*$ with vertex set $V(\Gamma^*) = F(\Gamma)$, with an edge of $\Gamma^*$ drawn across each edge of $\Gamma$. That is, for each edge $e$ that separates two (possibly equal) faces $f, f' \in F(\Gamma)$, there is a dual edge $e^* = ff' \in E(\Gamma^*)$.

[Figure: a plane graph $\Gamma$ and its dual $\Gamma^*$ superimposed.]

By definition, $n(\Gamma^*) = f(\Gamma)$ and $e(\Gamma^*) = e(\Gamma)$. It follows from Euler's formula that $f(\Gamma^*) = n(\Gamma)$. This can also be seen directly: every face of $\Gamma^*$ encloses exactly one vertex of $\Gamma$. Notice that if an edge $a$ is a bridge, then $a^*$ is a loop, and if $b$ is a loop, then $b^*$ is a bridge:

[Figure: a bridge $a$ and a loop $b$ in $\Gamma$, with $a^*$ a loop and $b^*$ a bridge in $\Gamma^*$.]

Warning: The plane dual is not a combinatorial invariant without further assumptions on $G$. For example, if $G$ has one vertex and three loops, then the dual of a "flower" drawing of $G$ is $K_{1,3}$, while the dual of a "Hawaiian earring" (nested) drawing of $G$ is $P_4$.

[Figure: the flower and Hawaiian-earring drawings of one vertex with three loops, and their duals.]

On the other hand, if $G$ satisfies certain additional hypotheses (for example, if $G$ is 3-connected), then all of its embeddings have isomorphic duals. In this case it is legitimate to speak of the planar dual of the graph $G$. In addition, if $\Gamma$ is a connected plane graph then $(\Gamma^*)^* \cong \Gamma$. In the following figure, the double duals are shown in blue.

[Figure: two plane graphs with their double duals shown in blue.]

Theorem 6.32. An edge set $A \subseteq E(\Gamma)$ is a cycle if and only if the dual edge set $A^*$ is a bond in $E(\Gamma^*)$.

Proof. Suppose that $A$ contains a cycle $C$. Then $A^* \supseteq [S, \bar S]$, where $S$ (resp. $\bar S$) is the set of faces inside (resp. outside) $C$; so $A^*$ contains a cut. On the other hand, if $A$ is acyclic, then it encloses no region, so $\Gamma^* - A^*$ is connected and $A^*$ contains no cut. Thus $A$ contains a cycle iff $A^*$ contains a cut. Meanwhile, the minimal edge sets containing cycles are just the cycles themselves, and the minimal disconnecting sets are the bonds. So $A$ is a cycle if and only if $A^*$ is a bond. □

Corollary 6.33. $e$ is a bridge in $\Gamma$ if and only if $e^*$ is a loop in $\Gamma^*$. (This is one justification for the use of the word "coloop" as a synonym for "bridge.")

Theorem 6.34. Let $\Gamma$ be a plane graph. TFAE:
(1) $\Gamma$ is bipartite.
(2) Every cycle in $\Gamma$ has even length.
(3) Every face of $\Gamma$ has even length.
(4) Every vertex of $\Gamma^*$ has even degree.
(5) $\Gamma^*$ is Eulerian.

Proof.
The implications $(1) \iff (2) \implies (3) \iff (4) \iff (5)$ are clear from the definitions, or from things we have already proved. So we need only show $(3) \implies (2)$; we argue contrapositively. Suppose that (2) fails, and let $C$ be an odd cycle. Let $q_1, \dots, q_r$ be the faces lying inside $C$, and let
$$E_1 = E(C), \qquad E_2 = \{e \in E(\Gamma) \mid e \text{ lies inside } C\}, \qquad E_3 = \{e \in E(\Gamma) \mid e \text{ lies outside } C\},$$
so that $E(\Gamma) = E_1 \,\dot\cup\, E_2 \,\dot\cup\, E_3$. Each edge of $E_1$ (resp. $E_2$, $E_3$) borders exactly one (resp. two, zero) of the faces $q_i$. Therefore,
$$\sum_{i=1}^{r} \ell(q_i) = |E_1| + 2|E_2|,$$
which is odd. Therefore $\ell(q_i)$ is odd for at least one face $q_i$, so (3) fails. □

6.7. The genus of a graph. We can't embed $K_{3,3}$ in the plane (or, equivalently, the sphere), so what if we build a bridge to avoid crossings? Essentially, we are adding a "handle" to the sphere to get a torus. This enables us to embed more graphs than can be embedded in the plane: for example, $K_{3,3}$.

[Figure: $K_{3,3}$ drawn on a sphere with a handle.]

This picture is awkward, but there's a nicer way to draw pictures on the torus. To construct a torus, we could take a sheet of paper, glue the top and bottom edges together, and glue the left and right sides together. The topological diagram for this gluing is a square with opposite sides identified, as shown on the left; the arrows indicate the orientations in which the edges are glued together. (Reversing one of the arrows would produce a Klein bottle instead of a torus.) Identified boundary points are marked with matching labels: the two red dots represent a single point $u$, the two green dots a single point $v$, and the four blue corners a single point $w$.

[Figure: the square gluing diagram for the torus, with identified points $u$, $v$, $w$ marked.]

That means that we can represent toroidal embeddings of graphs by drawing them on a square. For example, here's an embedding of $K_{3,3}$:

[Figure: $K_{3,3}$ with parts $\{a, b, c\}$ and $\{x, y, z\}$ drawn on the square diagram of the torus.]

And here's a toroidal $K_7$:

[Figure: $K_7$ on vertices $1, \dots, 7$ drawn on the square diagram of the torus.]

Definition 6.35. The $g$-holed torus $S_g$ is the surface obtained by adding $g$ handles to the sphere.

[Figure: the surfaces $S_1$, $S_2$, $S_4$.]

Definition 6.36. The genus of a graph $G$ is $\gamma(G) = \min\{g \mid G \text{ embeds on } S_g\}$.

Notice that $\gamma(G) \le \nu(G)$ (the minimum number of crossings in any plane drawing of $G$), because we can eliminate a crossing by adding a handle for one of the two edges to pass through. However, the inequality can be far from tight. In fact, not only does $K_5$ have genus 1, but so does $K_7$ (whose crossing number is 9)! (See p. 267 for the figure.)

There's an analogue of Euler's formula for graphs embedded on surfaces of higher genus. However, we have to be careful to talk only about embeddings in which the faces are 2-cells, i.e., homeomorphic (topologically equivalent) to $\mathbb{R}^2$. For example, consider the following two embeddings of $K_4$ on $S_1$.

[Figure: two embeddings of $K_4$ on the torus, drawn on the square diagram.]

The embedding on the left is a 2-cell embedding, because gluing the sides of the dashed square together joins the four green shaded triangles into a diamond, as shown. The embedding on the right is not a 2-cell embedding: the "outside" face (shown in yellow) is not simply connected, i.e., it contains closed curves that cannot be contracted to a point, such as the circle formed by the top edge of the dashed square.

Proposition 6.37 (Euler's formula for higher genus). If $G$ has a 2-cell embedding on a surface of genus $g$, then
$$n - e + f = 2 - 2g.$$

Note that $g$ here is the genus of the surface, which does not have to equal the genus of $G$ — a graph can have 2-cell embeddings on surfaces of more than one genus, but only of genus at least $\gamma(G)$. For instance, this says that a 2-cell toroidal embedding of $K_4$ ought to have $f = 2 - 2g - n + e = 2 - 2 - 4 + 6 = 2$ faces.

Corollary 6.38. Any graph $G$ of genus $g$ satisfies $e \le 3(n - 2 + 2g)$.
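Corollary 6.38 rearranges to a genus lower bound: $g \ge (e - 3n + 6)/6$. A quick sketch (mine) applying it to complete graphs:

```python
from math import ceil, comb

def genus_lower_bound(n, e):
    """Rearranging Corollary 6.38: any surface carrying G has
    genus g >= (e - 3n + 6)/6, so the ceiling bounds gamma(G) from below."""
    return max(0, ceil((e - 3 * n + 6) / 6))

for n in range(3, 9):
    print(f"K{n}: genus >= {genus_lower_bound(n, comb(n, 2))}")
# K5, K6, K7 all get lower bound 1 (each in fact embeds on the torus);
# K8 gets lower bound 2
```

For $K_n$ this bound works out to $(n-3)(n-4)/12$, and by the (deep) Ringel–Youngs theorem its ceiling is in fact the exact genus of $K_n$.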
6.8. Heawood's Theorem. If we can find a number $c$ such that every graph of genus $\gamma$ has a vertex of degree $< c$, then it will follow that every graph of genus $\gamma$ has $\chi(G) \le c$. It suffices to find $c$ such that every graph of genus $\gamma$ has average degree less than $c$.

Theorem 6.39 (Heawood, 1890). If $G$ is embeddable on a surface of genus $\gamma > 0$, then $\chi(G) \le \lfloor c \rfloor$, where
$$c = \frac{7 + \sqrt{1 + 48\gamma}}{2}.$$

Proof. It suffices to prove that $G$ has a vertex of degree $\le c - 1$; the desired result will then follow from induction on $n$. There is no problem if $n \le c$, so we assume that $n > c$. The quantity $c$ is a positive root of the polynomial $c^2 - 7c + (12 - 12\gamma) = 0$, which is equivalent to $c - 1 = 6 - \frac{12 - 12\gamma}{c}$. Therefore,
$$\frac{2e}{n} \le \frac{6(n - 2 + 2\gamma)}{n} \quad \text{(by Corollary 6.38)} \quad = 6 - \frac{12 - 12\gamma}{n} < 6 - \frac{12 - 12\gamma}{c} = c - 1,$$
where the strict inequality uses $n > c$ and $12 - 12\gamma \le 0$. So the average degree of $G$ is less than $c - 1$, which means that at least one vertex must have degree $\le c - 1$, as desired. □

Note that $c = 4$ for $\gamma = 0$. On the other hand, the argument isn't valid unless $\gamma > 0$. The bound evaluates to 7 for $\gamma = 1$; that is, every toroidal graph is 7-colorable. In fact, Heawood's bound is sharp for $\gamma > 0$; this is quite nontrivial, but can be proven more easily than the Four-Color Theorem. So, strangely, the problem of determining the maximum chromatic number of genus-$\gamma$ graphs is most difficult when $\gamma = 0$.

Wagner's Theorem can be generalized to higher genus, in the following sense:

Theorem 6.40. For every $g \ge 0$, there is some finite set $\Phi_g$ of (isomorphism types of) graphs such that
$$\gamma(G) \le g \iff G \text{ has no minor in } \Phi_g.$$

For $g = 0$, we have $\Phi_0 = \{K_5, K_{3,3}\}$. Lots and lots of elements of $\Phi_1$ are known, but not the complete list. How do we even know that the set is finite? By the following amazing result, due to Robertson and Seymour:

Graph Minor Theorem: In every infinite list of graphs, some graph is a minor of another.

It follows from the GMT that every list of minimal obstructions must be finite, since no two of its elements are comparable in the minor order.

7. The Tutte Polynomial

We have seen lots of invariants of graphs that satisfy a deletion-contraction recurrence, including:
$$\tau(G) = \tau(G - e) + \tau(G/e) \quad \text{(number of spanning trees)},$$
$$a(G) = a(G - e) + a(G/e) \quad \text{(number of acyclic orientations)},$$
$$p_G(k) = p_{G-e}(k) - p_{G/e}(k) \quad \text{(chromatic polynomial)}.$$
We'd like to put all these invariants under one roof.

7.1. Definitions and examples.

Definition 7.1. Let $G$ be a graph. The Tutte polynomial $T(G) = T_G(x, y) = T(G; x, y)$ is defined by the recurrence

(26) $T(G; x, y) = \begin{cases} 1 & \text{if } E(G) = \emptyset, & \text{(a)} \\ x \cdot T(G/e) & \text{if } e \text{ is a bridge}, & \text{(b)} \\ y \cdot T(G - e) & \text{if } e \text{ is a loop}, & \text{(c)} \\ T(G - e) + T(G/e) & \text{if } e \text{ is an ordinary edge (neither bridge nor loop)}. & \text{(d)} \end{cases}$

(Note that we are no longer working with simple graphs — we do want to keep track of loops and parallel edges.) There is a big problem with this definition: it is not clear that the polynomial $T(G; x, y)$ is independent of the choice of edge $e$. We will resolve this soon, but first, some examples.

Example 7.2. Let $F = F_r$ be a forest with $r$ edges (any forest). Then $T(F_r) = x^r$. For $r = 0$, this is case (a) of the recurrence (26). Otherwise, every edge of $F$ is a bridge, and $F/e$ is a forest with $r - 1$ edges. By induction, $T(F/e) = x^{r-1}$, so $T(F) = x^r$ by case (b).

Example 7.3. Let $L_r$ be the graph with one vertex and $r$ loops. A similar argument, using case (c) of (26), gives $T(L_r) = y^r$. More generally, any graph with $r$ edges, all of which are loops, has Tutte polynomial $y^r$.

Example 7.4. How about $K_3$? Let $e$ be any edge (it doesn't matter which one). Then case (d) of (26) gives
$$T(K_3) = T(K_3 - e) + T(K_3/e) = T(P_3) + T(C_2).$$
Now $P_3$ is a tree with two edges, so its Tutte polynomial is $x^2$. For the digon $C_2$, let $f$ be either edge. Then
$$T(C_2) = T(C_2 - f) + T(C_2/f) = T(K_2) + T(L_1) = x + y,$$
so $T(K_3) = x^2 + x + y$.
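Before the next example: the recurrence (26) translates directly into code. A compact Python sketch (mine; all names invented) represents $T(G; x, y)$ as a dictionary mapping $(i, j)$ to the coefficient of $x^i y^j$:

```python
from collections import Counter

def ncomp(vertices, edges):
    """Number of connected components of (V, A), by union-find."""
    parent = {v: v for v in vertices}
    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]
            x = parent[x]
        return x
    for a, b in edges:
        parent[find(a)] = find(b)
    return len({find(v) for v in vertices})

def shift(poly, a, b):
    """Multiply a polynomial (Counter of monomials) by x^a y^b."""
    return Counter({(i + a, j + b): c for (i, j), c in poly.items()})

def tutte(vertices, edges):
    """Tutte polynomial via the recurrence (26); vertices a frozenset,
    edges a tuple of 2-tuples (multigraph, loops allowed)."""
    if not edges:
        return Counter({(0, 0): 1})                        # case (a)
    (u, v), rest = edges[0], edges[1:]
    if u == v:                                             # loop: case (c)
        return shift(tutte(vertices, rest), 0, 1)
    contracted = tuple((u if a == v else a, u if b == v else b)
                       for a, b in rest)
    if ncomp(vertices, rest) > ncomp(vertices, edges):     # bridge: case (b)
        return shift(tutte(vertices - {v}, contracted), 1, 0)
    return tutte(vertices, rest) + tutte(vertices - {v}, contracted)  # (d)

V = frozenset(range(4))
E = tuple((i, j) for i in range(4) for j in range(i + 1, 4))   # K4
print(sorted(tutte(V, E).items()))
# [((0,1),2), ((0,2),3), ((0,3),1), ((1,0),2), ((1,1),4), ((2,0),3), ((3,0),1)]
# i.e. x^3 + y^3 + 3x^2 + 4xy + 3y^2 + 2x + 2y, as computed in Example 7.6 below
```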
Example 7.5. We will compute the Tutte polynomial of the graph $G$ obtained from a triangle by doubling one edge; let $f$ be one of the two parallel edges, and let $e$ be one of the other two edges.

[Figure: the triangle with one doubled edge, with $e$ and $f$ labeled.]

Applying the recurrence with the edge $e$ gives $T(G) = T(A) + T(B)$, where $A = G - e$ (a digon with a pendant edge) and $B = G/e$ (a triple edge), while applying the recurrence with $f$ gives $T(G) = T(Y) + T(Z)$, where $Y = G - f = K_3$ and $Z = G/f$ (a digon with a loop attached). There seems to be no particular reason why these two calculations ought to yield the same answer. But they do. First,
$$T(A) = x \cdot T(C_2) = x(x + y), \qquad T(B) = T(C_2) + T(L_2) = (x + y) + y^2,$$
so using $e$ gives $T(G) = x^2 + xy + y^2 + x + y$. On the other hand,
$$T(Y) = T(K_3) = x^2 + x + y, \qquad T(Z) = y \cdot T(C_2) = y(x + y),$$
so we get the same answer for $T(G)$ using $f$.

Example 7.6. One more calculation: $T(K_4)$. Of course, it doesn't matter which edge we start with: deleting and contracting a first edge gives $T(K_4) = T(W) + T(X)$, where $W = K_4 - e$ and $X = K_4/e$. Applying the recurrence (26) to a suitable edge of $W$ gives $T(W) = T(W') + T(W'')$, where $T(W') = x \cdot T(K_3)$ and $W''$ is the graph $G$ of Example 7.5. Applying the recurrence to a suitable edge of $X$ gives $T(X) = T(X') + T(X'')$, where $X' \cong G$ and $T(X'') = y \cdot T(B)$, with $B$ as in Example 7.5. Putting it all together:
$$T(K_4) = T(W') + T(W'') + T(X') + T(X'') = x(x^2 + x + y) + 2(x^2 + xy + y^2 + x + y) + y(x + y + y^2)$$
$$= (x^3 + y^3) + (3x^2 + 4xy + 3y^2) + (2x + 2y).$$

There are some interesting things about this polynomial. First, it is symmetric in $x$ and $y$ — that is, if we swap $x$ and $y$, the polynomial is unchanged. Second, if we plug in various values of $x$ and $y$, the numbers that come out are rather suggestive:
$$T(K_4; 0, 0) = 0, \quad T(K_4; 0, 1) = 6, \quad T(K_4; 1, 1) = 16, \quad T(K_4; 0, 2) = 24, \quad T(K_4; 1, 2) = 38, \quad T(K_4; 2, 2) = 64.$$
It's not entirely clear what all this means, but $24 = 4!$ is the number of acyclic orientations of $K_4$, and 16 is the number of spanning trees, among other things. (These are not coincidences!)

In order to prove that the Tutte polynomial is well-defined by the recurrence (26), we will give an explicit formula for $T(G; x, y)$.

Definition 7.7. The rank $r_G(A) = r(A)$ of an edge set $A \subseteq E(G)$ is defined to be

(27) $r(A) = \max\{|X| : X \subseteq A \text{ is acyclic}\}$.

The rank of $G$ itself, denoted $r(G)$, is just the rank of its edge set; this is the size of a maximal spanning forest of $G$, namely $n(G) - c(G)$. In addition, we define the corank and nullity of $A$ by
$$\operatorname{cork}(A) = \operatorname{cork}_G(A) = r(G) - r(A), \qquad \operatorname{null}(A) = \operatorname{null}_G(A) = |A| - r(A).$$
Equivalently, the corank is the minimum number of edges that need to be added to $A$ in order to increase its rank to $r(G)$ (that is, to make it contain a maximal forest), and the nullity is the minimum number of edges that need to be deleted from $A$ in order to make it acyclic.

We can now state the fundamental theorem about the Tutte polynomial.

Theorem 7.8. For every graph $G = (V, E)$, we have
$$T(G; x, y) = \sum_{A \subseteq E} (x - 1)^{\operatorname{cork}(A)} (y - 1)^{\operatorname{null}(A)}.$$

For obvious reasons, this formula is referred to as the corank-nullity form of the Tutte polynomial. Before giving the proof, we verify this formula for the graph $G = K_3$. Here $r(G) = 2$, and $r(A) = \min(|A|, 2)$ for all $A \subseteq E$. Theorem 7.8 lets us calculate the Tutte polynomial as follows:

|A|   # of sets $A \subseteq E$   $r(A)$   contribution to $T(G; x, y)$
 0            1                    0       $(x-1)^2(y-1)^0 = x^2 - 2x + 1$
 1            3                    1       $3(x-1)^1(y-1)^0 = 3x - 3$
 2            3                    2       $3(x-1)^0(y-1)^0 = 3$
 3            1                    2       $(x-1)^0(y-1)^1 = y - 1$

The total is $x^2 + x + y$, which agrees with the calculation in Example 7.4.
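Theorem 7.8 also gives a one-line (if exponential) algorithm: sum over all edge subsets. A sketch (mine, reusing ncomp from the earlier Tutte sketch), evaluated at the suggestive points from Example 7.6:

```python
from itertools import combinations

def rank(vertices, edges):
    """r(A) = n - (number of components of the spanning subgraph (V, A))."""
    return len(vertices) - ncomp(vertices, edges)   # ncomp from the sketch above

def tutte_eval(vertices, edges, x, y):
    """Evaluate T(G; x, y) by the corank-nullity sum of Theorem 7.8."""
    rG = rank(vertices, edges)
    total = 0
    for k in range(len(edges) + 1):
        for A in combinations(edges, k):
            rA = rank(vertices, A)
            total += (x - 1) ** (rG - rA) * (y - 1) ** (k - rA)
    return total

V = frozenset(range(4))
E = tuple((i, j) for i in range(4) for j in range(i + 1, 4))   # K4
print(tutte_eval(V, E, 1, 1), tutte_eval(V, E, 2, 0), tutte_eval(V, E, 2, 2))
# -> 16 spanning trees, 24 acyclic orientations, 2^6 = 64 edge subsets
```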
Proof of Theorem 7.8. Define
$$\tilde T(G) = \sum_{A \subseteq E} (x - 1)^{r(G) - r(A)} (y - 1)^{|A| - r(A)}.$$
We need to show that $\tilde T(G) = T(G)$ for all graphs $G = (V, E)$. We will do this by induction on $|E|$. In the base case $E = \emptyset$, we have $r(\emptyset) = |\emptyset| = 0$, so $\tilde T(G) = (x-1)^0 (y-1)^0 = 1 = T(G)$ (by case (a) of (26)).

For the inductive step, suppose that $T(G') = \tilde T(G')$ for all graphs $G'$ with $e(G') < e(G)$. In particular, if we choose $e \in E$ arbitrarily, then the theorem holds for $G - e$ and (provided that $e$ is not a loop) for $G/e$.

First, suppose that $e$ is a loop. Let $G' = G - e$. Then $r(G') = r(G)$, and for every $A \subseteq E$, $r_G(A) = r_G(A \setminus e) = r_{G'}(A \setminus e)$. Splitting the sum over $A$ according to whether $e \in A$, and writing $A' = A \setminus \{e\}$ in the second sum (so that $\operatorname{null}(A) = \operatorname{null}_{G'}(A') + 1$), we get
$$\tilde T(G) = \sum_{A \subseteq E:\ e \notin A} (x-1)^{\operatorname{cork}(A)} (y-1)^{\operatorname{null}(A)} + \sum_{A \subseteq E:\ e \in A} (x-1)^{\operatorname{cork}(A)} (y-1)^{\operatorname{null}(A)}$$
$$= \sum_{A' \subseteq E \setminus \{e\}} (x-1)^{\operatorname{cork}_{G'}(A')} (y-1)^{\operatorname{null}_{G'}(A')} + (y-1) \sum_{A' \subseteq E \setminus \{e\}} (x-1)^{\operatorname{cork}_{G'}(A')} (y-1)^{\operatorname{null}_{G'}(A')}$$
$$= \big(1 + (y-1)\big)\, \tilde T(G') = y \cdot \tilde T(G') = y \cdot T(G - e)$$
(by the definition of $\tilde T$, and then by induction).

Second, if $e$ is a bridge, then $\tilde T(G) = x \cdot T(G/e)$. The proof is similar (now it's the corank that changes instead of the nullity) and is left as an exercise.

Third, suppose that $e$ is neither a bridge nor a loop. Then $r(G) = r(G - e) = r(G/e) + 1$, and for $A \subseteq E$,
$$r_G(A) = \begin{cases} r_{G-e}(A) & \text{if } e \notin A, \\ r_{G/e}(A \setminus e) + 1 & \text{if } e \in A. \end{cases}$$
So we can calculate $\tilde T(G)$ as
$$\sum_{A:\ e \notin A} (x-1)^{r(G) - r(A)} (y-1)^{|A| - r(A)} + \sum_{A:\ e \in A} (x-1)^{r(G) - r(A)} (y-1)^{|A| - r(A)}$$
$$= \sum_{A \subseteq E \setminus \{e\}} (x-1)^{r(G-e) - r_{G-e}(A)} (y-1)^{|A| - r_{G-e}(A)} + \sum_{A \subseteq E \setminus \{e\}} (x-1)^{r(G/e) - r_{G/e}(A)} (y-1)^{|A| - r_{G/e}(A)}$$
$$= \tilde T(G - e) + \tilde T(G/e),$$
which agrees with case (d) of (26). □

Corollary 7.9. The choice of edge does not matter when computing the Tutte polynomial by deletion-contraction. Moreover, $T(G; x, y)$ has nonnegative integer coefficients (for short, $T(G; x, y) \in \mathbb{N}[x, y]$).

Now that we have it, what do we do with it? Well, all the other deletion-contraction invariants that we know about can be obtained as specializations of the Tutte polynomial — that is, by setting the parameters $x$ and $y$ to particular values.

Theorem 7.10. $\tau(G) = T(G; 1, 1)$.

For example, let $G$ be as in Example 7.5. Here $G$ is connected, so "maximal spanning forest" is the same thing as "spanning tree." We calculated $T(G; x, y) = x^2 + xy + y^2 + x + y$, so $T(G; 1, 1) = 5$. Indeed, $\tau(G) = 5$.

First proof of Theorem 7.10. Plugging $x = y = 1$ into (26), we find that
$$T(G; 1, 1) = \begin{cases} 1 & \text{if } E(G) = \emptyset, \\ T(G/e; 1, 1) & \text{if } e \text{ is a bridge}, \\ T(G - e; 1, 1) & \text{if } e \text{ is a loop}, \\ T(G - e; 1, 1) + T(G/e; 1, 1) & \text{otherwise}. \end{cases}$$
This is precisely the recurrence defining $\tau(G)$. □

Second proof of Theorem 7.10. Plug $x = y = 1$ into the explicit formula of Theorem 7.8. It looks as though this will kill every term, but actually the terms with both $r(G) - r(A) = 0$ and $|A| - r(A) = 0$ are identically 1, and are unaffected by the substitution $x = y = 1$. Every other term will indeed be killed. Therefore,

(28) $T(G; 1, 1) = \#\{A \subseteq E \mid r(A) = r(G),\ |A| = r(A)\}$.

But $r(A) = r(G)$ if and only if the spanning subgraph $(V, A)$ has the same number of components as $G$, and $r(A) = |A|$ if and only if $A$ is acyclic. Therefore, the edge sets $A$ counted in (28) are precisely the maximal spanning forests of $G$. □

Some more specializations that come from the corank-nullity expansion:
$$T(G; 2, 1) = \text{number of acyclic subsets of } E(G),$$
$$T(G; 1, 2) = \text{number of maximum-rank subsets of } E(G) \ (= \text{spanning subgraphs with } c = c(G)),$$
$$T(G; 2, 2) = 2^{e(G)}.$$

Another very useful fact is the following.

Proposition 7.11.
(1) If $G_1, \dots, G_n$ are the connected components of $G$, then $T(G) = T(G_1) \cdots T(G_n)$.
(2) If $G_1$ and $G_2$ are graphs on disjoint vertex sets and $G$ is obtained by identifying a vertex of $G_1$ with a vertex of $G_2$, then $T(G) = T(G_1)\, T(G_2)$.

These results are immediate from the corank-nullity generating function. Another way to phrase them is that $T(G)$ is the product of the Tutte polynomials of its blocks. (A block of $G$ is either a maximal 2-connected subgraph or a cut-edge; every graph has a unique decomposition into blocks, and each edge belongs to exactly one block.)

7.2. The chromatic polynomial from the Tutte polynomial. Every invariant defined by a linear deletion-contraction recurrence can be obtained from the Tutte polynomial. In particular, we've seen that the number $s(G)$ of strong orientations and the number $a(G)$ of acyclic orientations satisfy such recurrences. In fact, $s(G) = T(G; 0, 2)$ and $a(G) = T(G; 2, 0)$. The proofs of these facts are similar to the first proof of Theorem 7.10. In fact, there's a universal recipe for obtaining any deletion-contraction invariant as a Tutte polynomial evaluation (see Bollobás). Sometimes the evaluation requires a correction factor involving the number of vertices or components of $G$ (data that the Tutte polynomial does not keep track of). A basic example is the chromatic polynomial $p_G(k)$. In order to obtain $p_G(k)$ from $T(G; x, y)$, we could proceed purely algebraically, by figuring out what to plug into the Tutte recurrence to recover the chromatic recurrence (Theorem 5.8). However, I think the following approach is more enlightening.

For the purpose of the following discussion, the term "$k$-coloring" is going to mean any function $f : V(G) \to [k]$. A coloring $f$ is proper at an edge $xy \in G$ if $f(x) \ne f(y)$, and it is proper on $G$ if it is proper at every edge.

Proposition 7.12 (Whitney's formula). Let $G$ be a graph with $n$ vertices and rank function $r$. Then
$$p_G(k) = \sum_{A \subseteq E(G)} (-1)^{|A|}\, k^{n - r(A)}.$$

Proof. This is an application of inclusion-exclusion. If we want to count proper $k$-colorings, we start by counting all colorings of $G$. Then, for each edge, we subtract the number of colorings that are improper at that edge. We have now undercounted the colorings that are improper at two edges; this accounts for the summands with $|A| = 2$. But then we have overcounted all the colorings that are improper at three edges... and so on. This means that
$$p_G(k) = \sum_{A \subseteq E(G)} (-1)^{|A|}\, \#\{\text{colorings that are improper at (at least) } A\}.$$
Being improper at every edge of $A$ is equivalent to being constant on each component of the spanning subgraph $(V, A)$. The number of components is $n - r(A)$, so there are $k^{n - r(A)}$ such colorings. □

Whitney's formula has the same general form as the corank-nullity form of the Tutte polynomial — it is a generating function over edge sets of $G$, with summands that involve rank and cardinality. To connect them, we do a little algebra:
$$p_G(k) = k^n \sum_{A \subseteq E(G)} (-1)^{|A|}\, k^{-r(A)} = k^n \sum_{A} (-1)^{|A| - r(A)}\, (-k)^{-r(A)}$$
$$= (-k)^{-r(E)}\, k^n \sum_{A} (-1)^{|A| - r(A)}\, (-k)^{r(E) - r(A)} = (-1)^{r(E)}\, k^{n - r(E)} \sum_{A} (-1)^{|A| - r(A)}\, (-k)^{r(E) - r(A)}$$
$$= (-1)^{n(G) - c(G)}\, k^{c(G)} \left.\sum_{A} (y - 1)^{|A| - r(A)}\, (x - 1)^{r(E) - r(A)} \right|_{x - 1 = -k,\ y - 1 = -1} = (-1)^{n(G) - c(G)}\, k^{c(G)}\, T(G; 1 - k, 0).$$

I'll state that as a theorem for ease of reference:

Theorem 7.13. Let $G$ be a graph with $n$ vertices and $c$ components. Then
$$p_G(k) = (-1)^{n - c}\, k^c\, T(G; 1 - k, 0).$$

Corollary 7.14. The number of acyclic orientations of any graph $G$ is $a(G) = T(G; 2, 0)$.

Proof. Recall that $a(G) = |p_G(-1)|$. Plug $k = -1$ into both sides of Theorem 7.13. (Taking absolute values is unnecessary in the resulting formula because $T(G; 2, 0)$ is always nonnegative.) □
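Theorem 7.13 and Corollary 7.14 can be spot-checked numerically with the earlier sketches (chrom, tutte_eval, and ncomp, all defined above):

```python
# check Theorem 7.13 on K4, reusing chrom, tutte_eval, ncomp from above
V = frozenset(range(4))
E = tuple((i, j) for i in range(4) for j in range(i + 1, 4))
n, c = len(V), ncomp(V, E)
for k in range(-1, 6):
    assert chrom(V, E, k) == (-1) ** (n - c) * k ** c * tutte_eval(V, E, 1 - k, 0)
# in particular k = -1 gives a(K4) = T(K4; 2, 0) = 24 (Corollary 7.14)
```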
These results should make you wonder what the dual of a coloring is — i.e., what combinatorial object has a generating function equal to $T(G; 0, 1 - k)$ (up to appropriate correction factors)?

Definition 7.15. Let $G = (V, E)$ be a graph with oriented edge set $D$, and let $A$ be a finite abelian group, written additively and with identity element 0. A nowhere-zero $A$-flow is a function $\varphi : D \to A \setminus \{0\}$ such that the net flow into every vertex equals the net flow out; that is,
$$\sum_{e = \vec{uv} \in D} \varphi(e) = \sum_{e = \vec{vw} \in D} \varphi(e) \qquad \text{for every } v \in V(G).$$
The choice of orientation does not affect the number of nowhere-zero $A$-flows (reversing an edge $e$ corresponds to replacing $\varphi(e)$ with $-\varphi(e)$), so this number depends only on $G$ and $A$. Observe that if $G$ has a cut-edge then there are no nowhere-zero $A$-flows at all (do you see why?). Dually, if $G$ has a loop $e$ then $\varphi(e)$ can take on any of the $|A| - 1$ nonzero values with no constraints.

Theorem 7.16. Let $k = |A|$. Then the number of nowhere-zero $A$-flows on $G$ is
$$p^*_G(k) = (-1)^{e - n + c}\, T(G; 0, 1 - k).$$

We omit the proof (exercise?). In particular, this quantity is independent of the group structure of $A$ and depends only on its cardinality, which is certainly not obvious from the definition. Accordingly, we can speak of $k$-flows rather than $A$-flows. The function $p^*_G(k)$ is called the flow polynomial of $G$. Theorems 7.13 and 7.16 together say that if $G$ is connected and planar, then $p_{G^*}(k) = k\, p^*_G(k)$.

A famous unsolved problem in graph theory is Tutte's 5-Flow Conjecture, which asserts that every 2-edge-connected graph $G$ has at least one nowhere-zero 5-flow, i.e., $p^*_G(5) > 0$. Jaeger proved the corresponding statement for 8-flows in 1976, and Seymour proved it for 6-flows in 1981; this is still the state of the art.
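Definition 7.15 is directly checkable by brute force when $A = \mathbb{Z}_k$. A sketch (mine):

```python
from itertools import product

def nowhere_zero_flows(n, arcs, k):
    """Count nowhere-zero Z_k-flows on an oriented graph (Definition 7.15).
    `arcs` is a list of ordered pairs on vertices 0..n-1."""
    count = 0
    for values in product(range(1, k), repeat=len(arcs)):
        net = [0] * n
        for (u, v), f in zip(arcs, values):
            net[u] -= f            # f units leave u ...
            net[v] += f            # ... and enter v
        count += all(x % k == 0 for x in net)   # conservation at every vertex
    return count

# K4, arbitrarily oriented
arcs = [(0, 1), (0, 2), (0, 3), (1, 2), (1, 3), (2, 3)]
print([nowhere_zero_flows(4, arcs, k) for k in range(2, 6)])
# -> [0, 0, 6, 24]
```

The printed values match $(k-1)(k-2)(k-3)$, the flow polynomial of $K_4$ predicted by Theorem 7.16 (with $T(K_4; 0, y) = y^3 + 3y^2 + 2y$ from Example 7.6).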
7.3. Edge activities. The coefficients of $T_G(x, y)$ are all nonnegative integers. What do they count? In other words, there should be some set $M$ ("mysteries") and some combinatorially defined functions $i : M \to \mathbb{N}$ and $e : M \to \mathbb{N}$ such that
$$T_G(x, y) = \sum_{m \in M} x^{i(m)} y^{e(m)} = \sum_{j, k \in \mathbb{N}} \#\{m \in M \mid i(m) = j,\ e(m) = k\}\, x^j y^k.$$
If we plug $x = y = 1$ into this formula, we get $T_G(1, 1) = |M|$, which suggests that the $m$'s should be the maximal spanning forests of $G$. What are the functions $i$ and $e$? Well...

Assume WLOG that $G$ is connected. Order the edges $E = E(G)$ as $e_1, e_2, \dots, e_s$, and let $T$ be a spanning tree.

Definition 7.17. An edge $e_i \in T$ is internally active with respect to $T$ if it is the smallest edge of the cut between the two components of $T - e_i$; equivalently,
$$e_j \notin T \text{ and } T - e_i + e_j \text{ a tree} \implies j \ge i.$$
Meanwhile, an edge $e_j \in E \setminus T$ is externally active with respect to $T$ if it is the smallest edge of the unique cycle in $T + e_j$; equivalently,
$$e_i \in T \text{ and } T - e_i + e_j \text{ a tree} \implies i \ge j.$$
Let $a(T)$ be the number of internally active edges of $T$ (its "internal activity") and let $b(T)$ be the number of externally active edges (its "external activity"). Call the ordered pair $(a(T), b(T))$ the biactivity of $T$.

Theorem 7.18. For all $G$ and all orderings of $E(G)$, we have
$$T(G; x, y) = \sum_T x^{a(T)}\, y^{b(T)},$$
the sum being over spanning trees. That is, the coefficient $t_{ij}$ of $x^i y^j$ in $T(G; x, y)$ is the number of spanning trees of $G$ with biactivity $(i, j)$.

The proof, again, involves verifying the recurrence. For a particular spanning tree $T$, the numbers $a(T), b(T)$ of course depend on the choice of ordering, so the collection of trees with a given biactivity is not an isomorphism invariant. However, the number $t_{ij}$ of such trees does not depend on the ordering.

We will see the Tutte polynomial again shortly, in the context of network reliability. Other applications of the Tutte polynomial outside pure graph theory include invariants in knot theory (the Kauffman bracket and Jones polynomial) and statistical mechanics (the Potts model). The Tutte polynomial is actually an invariant not just of graphs but of matroids, where its applications include weight enumerators of linear codes (Greene), counting chambers of hyperplane arrangements (Zaslavsky), and much more.

8. Probabilistic Methods in Graph Theory

8.1. A very brief taste of Ramsey theory.

Theorem 8.1 (Ramsey's Theorem). Let $p, q$ be positive integers. Then there is an integer $n$ such that every graph $G$ of order at least $n$ contains either a clique of size $p$ or a coclique of size $q$.

Traditionally, the result is phrased a little differently: for every $p, q$, there is some $N$ such that whenever $n \ge N$ and the edges of $K_n$ are 2-colored red and blue, the result contains either a red $K_p$ or a blue $K_q$. (Of course, the graph $G$ referred to in the theorem is just the spanning subgraph formed by the red edges.)

Proof. We proceed by double induction on the pair $(p, q)$. If $p = 1$, then we can take $n = q$; likewise, if $q = 1$ then we can take $n = p$. Now, suppose the claim is true for all pairs $(p', q')$ such that $p' \le p$, $q' \le q$, and $(p', q') \ne (p, q)$, and let $R(p', q')$ be the smallest number such that all graphs of order $\ge R(p', q')$ contain a clique of size $p'$ or a coclique of size $q'$. Let $n \ge R(p, q - 1) + R(p - 1, q)$. Color the edges of $K_n$ red and blue. Fix a vertex $x$ and let
$$A = \{y \in V \mid xy \text{ is colored red}\}, \qquad B = \{y \in V \mid xy \text{ is colored blue}\}.$$
Then $|A| + |B| = n - 1$, so either $|A| \ge R(p - 1, q)$ or $|B| \ge R(p, q - 1)$ (both inequalities can't be false). WLOG, suppose the former. If $A$ contains a blue $K_q$ then there is nothing to prove, while if $A' \subseteq A$ is a red $K_{p-1}$ then $A' \cup \{x\}$ is a red $K_p$. □

The theorem shows that $R(p, q)$ (called a Ramsey number) is well-defined for all positive integers $p, q$, and the proof gives the recursive upper bound
$$R(p, q) \le R(p - 1, q) + R(p, q - 1).$$
This bound, however, is far from sharp, and the exact value of (most) Ramsey numbers is unknown.

Ramsey's theorem admits numerous generalizations. For example, we can use more than two colors:

Theorem 8.2. Let $p_1, \dots, p_\ell$ be positive integers. Then, for any sufficiently large $n$, whenever the edges of $K_n$ are colored with $\ell$ colors, there is some monochromatic clique of size $p_i$ in color $i$, for some $i$.

The proof is analogous to that of Theorem 8.1. There are also natural generalizations to hypergraphs, and even to infinite graphs!

What about trying to find a lower bound for Ramsey numbers? Hold that thought.

8.2. The reliability polynomial. Suppose that we have a connected graph $G = (V, E)$ in which the edges might disappear. (Think of a power grid in which there is a chance that some lines might go down, or a city during a snowstorm that might block access to some roads.) We would like the network to remain connected. How can we measure the reliability of the network, i.e., the probability that it remains connected?

In order to obtain a model we can work with, we will assume that every edge has probability $p$ of staying alive ($0 \le p \le 1$), and that the edges are all independent. (In the language of probability, the edges correspond to a family of IID random variables; IID stands for "independent and identically distributed.") Therefore, the probability that any given $A \subseteq E$ is exactly the set of surviving edges is $p^{|A|}(1 - p)^{|E| - |A|}$, and the probability that the graph remains connected is
$$R_G(p) = \sum_{\substack{A \subseteq E \\ (V, A) \text{ connected}}} p^{|A|} (1 - p)^{|E| - |A|} = (1 - p)^{|E|} \sum_{A} \left(\frac{p}{1 - p}\right)^{|A|}.$$
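$R_G(p)$ can be computed exactly by enumerating edge subsets — exponential, but fine for small graphs. A sketch (mine, reusing ncomp from the Tutte sketch in §7.1):

```python
from itertools import combinations

def reliability(vertices, edges, p):
    """R_G(p) by enumerating the surviving edge sets (exact, exponential)."""
    total = 0.0
    for k in range(len(edges) + 1):
        for A in combinations(edges, k):
            if ncomp(vertices, A) == 1:          # surviving graph connected
                total += p ** k * (1 - p) ** (len(edges) - k)
    return total

V = frozenset(range(3))
E = ((0, 1), (1, 2), (0, 2))                     # a triangle
print(reliability(V, E, 0.9))   # 3 * 0.9^2 * 0.1 + 0.9^3 = 0.972
```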
In fact, we can recover $R_G(p)$ from the Tutte polynomial. Being connected is equivalent to having corank 0, and we can kill off all the positive-corank edge sets by setting $x = 1$ in Theorem 7.8:
$$T(G; 1, y) = \sum_{A \text{ connected}} (y - 1)^{|A| - r(A)} = (y - 1)^{-r(E)} \sum_{A \text{ connected}} (y - 1)^{|A|},$$
since $\operatorname{cork} A = 0$ also means $r(A) = r(E)$. Set $y - 1 = p/(1 - p)$, i.e., $y = \frac{p}{1 - p} + 1 = \frac{1}{1 - p}$, which gives
$$T\!\left(G; 1, \tfrac{1}{1 - p}\right) = \left(\frac{p}{1 - p}\right)^{-r(E)} \sum_{A} \left(\frac{p}{1 - p}\right)^{|A|} = \left(\frac{p}{1 - p}\right)^{-r(E)} (1 - p)^{-|E|}\, R_G(p),$$
or equivalently

(29) $R_G(p) = \left(\dfrac{p}{1 - p}\right)^{r(E)} (1 - p)^{|E|}\, T\!\left(G; 1, \tfrac{1}{1 - p}\right) = p^{r(G)}\, (1 - p)^{\operatorname{null} G}\, T\!\left(G; 1, \tfrac{1}{1 - p}\right)$.

In particular, $R_G(p)$ is a polynomial function of $p$, called the reliability polynomial. This is great for theory, but it is not practical for large graphs, because computing the reliability polynomial is exponentially hard. Also, we might want to measure reliability for graphs for which we have messy or incomplete data. Fortunately, we can use the random graph model described above in another way.

8.3. Random graphs. Suppose that $G$ is a graph with a billion vertices and a trillion edges. What is the probability that $G$...
• is connected?
• has an isolated vertex?
• is 4-colorable?
• is 2-connected?

Applications include studying large networks (e.g., the Internet, Facebook) in which it is impossible or infeasible to obtain a complete list of vertices, but which we may be able to model locally. For example, suppose we want to solve the six-degrees-of-separation problem. Construct a graph $G$ in which the vertices are the $n$ people in the world, and edges indicate that two people have met at least once (however you choose to define "met"). What is the probability that any two given people have met? We'll make the assumption that these probabilities are independent for any two pairs of people (not really true, but not totally silly, and probably necessary if you want to be able to calculate anything at all). We'll also assume that the number of acquaintances of any single person is about constant, and doesn't depend on $n$. (The number of people I meet each day is affected very little, or not at all, by, say, birth rates in a country I have never visited.) Therefore, $\Pr[xy \in E(G)] \sim \frac{1}{n}$.

Now we can model this as a graph problem. Let $G$ be a graph on $n$ vertices in which each of the $\binom{n}{2}$ possible edges occurs with probability $\frac{k}{n}$, where $k$ is a constant. Can we then calculate
$$\lim_{n \to \infty} \Pr[G \text{ has diameter} \le 6]?$$
Of course, we could replace "diameter $\le 6$" with many other graph properties of interest. We could also ask what happens if we change the assumption about how $p$ behaves as $n \to \infty$. In general, $p$ will tend to 0, but how fast it does so — like $1/n$? like $1/n^2$? like $1/\ln n$? — can change the answers to these questions dramatically.

Let's get precise.

Definition 8.3. A discrete probability space or probability model $(S, \Pr)$ is a finite set $S$ together with a map $\Pr : S \to [0, 1]$ such that $\sum_{s \in S} \Pr(s) = 1$. A subset of $S$ is called an event. The probability of an event $A$ is
$$\Pr(A) = \sum_{s \in A} \Pr(s).$$
Two events $A, B$ are independent if $\Pr(A \cap B) = \Pr(A) \Pr(B)$.

We are studying the probability model whose underlying set $S$ is the set of simple graphs on $n$ labeled vertices; in particular, $|S| = 2^{\binom{n}{2}}$. By saying that each edge occurs with independent probability $p$, we are saying that for $G \in S$,
$$\Pr(G) = p^{e(G)} (1 - p)^{\binom{n}{2} - e(G)}.$$
This setup is called the Erdős–Rényi random graph model, denoted $G(n, p)$. There are other random graph models that are interesting, but this is the most classical and best-studied one, and the one we will concentrate on exclusively. Be warned that it is common to talk about "the graph $G(n, p)$" as though it were a specific graph, rather than a probability model. For example, "the probability that $G(n, p)$ is connected" really means "the value $\Pr[A]$, where $A$ is the set of connected graphs on $n$ vertices, under the probability model $G(n, p)$."
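Sampling from $G(n, p)$ is a one-liner, which makes the questions above easy to explore empirically. A sketch (mine, reusing ncomp from §7.1):

```python
import random

def sample_gnp(n, p, rng=random):
    """Draw one graph from G(n, p): each pair independently with probability p."""
    return [(i, j) for i in range(n) for j in range(i + 1, n)
            if rng.random() < p]

def connected(n, edges):
    return ncomp(frozenset(range(n)), edges) == 1

trials = 500
for n in (20, 50, 100):
    hits = sum(connected(n, sample_gnp(n, 0.2)) for _ in range(trials))
    print(n, hits / trials)
# with p = 0.2 held constant, the empirical probability climbs toward 1 as n grows
```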
There are other random graph models that are interesting, but this is the most classical and best-studied one, and the one we will concentrate on exclusively. Be warned that it is common to talk about "the graph $G(n,p)$" as though it were a specific graph, rather than a probability model. For example, "the probability that $G(n,p)$ is connected" really means "the value $\Pr[A]$, where $A$ is the set of connected graphs on $n$ vertices under the probability model $G(n,p)$".

Proposition 8.4. If $p > 0$ is constant, then $G(n,p)$ is connected almost always.

The first step is to define what "almost always" means. Specifically, let $q_n$ be the probability that $G(n,p)$ is connected. Then to say that almost all graphs are connected is to say that $\lim_{n\to\infty} q_n = 1$. So, let's calculate $q_n$. The game plan is going to start with combinatorics, then continue with analysis and probability.

Proof. Recall that a graph $G = (V, E)$ is connected if and only if $E \cap [A, \bar{A}] \ne \emptyset$ for every $\emptyset \subsetneq A \subsetneq V$. We can calculate the probability of $E \cap [A, \bar{A}] = \emptyset$ exactly, since $|[A, \bar{A}]| = |A| \, |\bar{A}|$. Therefore,
$$1 - q_n = \Pr\left(\bigcup_A \left\{E \cap [A, \bar{A}] = \emptyset\right\}\right) \le \sum_A \Pr\left(E \cap [A, \bar{A}] = \emptyset\right) = \frac{1}{2} \sum_{k=1}^{n-1} \binom{n}{k} (1-p)^{k(n-k)}$$
(where $k = |A|$, and the $1/2$ factor arises because $[A, \bar{A}] = [\bar{A}, A]$)
$$< \sum_{k=1}^{\lfloor (n-1)/2 \rfloor} n^k (1-p)^{k(n-k)} \qquad \text{(since the sum is a palindrome and } \binom{n}{k} < n^k\text{)}$$
$$< \sum_{k=1}^{\lfloor (n-1)/2 \rfloor} n^k (1-p)^{kn/2} < \sum_{k=1}^{\infty} \left(n(1-p)^{n/2}\right)^k = \frac{x}{1-x},$$
where $x = n(1-p)^{n/2}$. On the other hand, $\lim_{n\to\infty} x = 0$ because exponential decay kills polynomial growth (check it with L'Hôpital's Rule if you wish). It follows that $\lim_{n\to\infty} (1 - q_n) = 0$, i.e., $\lim_{n\to\infty} q_n = 1$. $\square$

Notice that we used some inequalities that are not close to sharp, like $\binom{n}{k} < n^k$. That's fine if all we care about is to show that $q_n \to 1$, but if you want more precise information on how fast it converges to 1, then you would want to tighten up those inequalities.

8.4. Back to Ramsey theory. Consider the Erdős–Rényi random graph $G(n, 1/2)$. For each set $A$ of $k$ vertices, let $E_A$ be the event that $A$ is either a clique or a coclique, and let $E = \bigcup_A E_A$ be the event that $G(n,1/2)$ contains either a $k$-clique or a $k$-coclique. In particular, $\Pr[E] < 1$ if and only if there exists a graph on $n$ vertices with no $k$-clique or $k$-coclique — i.e., if $R(k,k) > n$. This is a simple observation, but a very powerful one, because it means that we can prove combinatorial theorems using probabilistic methods. This was one of Erdős's main contributions to combinatorial theory. Check this out.

For each $k$-set $A$ we have $\Pr[E_A] = 2^{1 - \binom{k}{2}}$ (of the $2^{\binom{k}{2}}$ equally likely configurations of the pairs in $A$, exactly two are monochromatic). Therefore,
$$\Pr[E] = \Pr\left[\bigcup_A E_A\right] \le \sum_A \Pr[E_A] = \binom{n}{k} 2^{1 - \binom{k}{2}}$$
(with the union and the sum over all $A \subseteq [n]$ of size $k$). That is, if $\binom{n}{k} 2^{1 - \binom{k}{2}} < 1$ then $R(k,k) > n$. This implies the following result:

Theorem 8.5 (Erdős 1947). $R(k,k) > 2^{k/2}$.

Proof. For $k = 3$, we have $R(3,3) = 6 > 2^{3/2}$, so suppose $k \ge 4$. Then $k! > 2^k$, so
$$\binom{n}{k} = \frac{n(n-1)\cdots(n-k+1)}{k!} < \frac{n^k}{2^k}.$$
(This is a tighter inequality than the crude $\binom{n}{k} < n^k$ we used earlier, although it is still pretty crude.) Therefore, if $n < 2^{k/2}$ then
$$\binom{n}{k} 2^{1 - \binom{k}{2}} < \frac{n^k}{2^k} \, 2^{1 - \binom{k}{2}} < 2^{k^2/2 - k + 1 - \binom{k}{2}} = 2^{1 - k/2} < 1,$$
and we are done by the previous discussion. $\square$

Using Stirling's approximation
$$\sqrt{2\pi n}\,(n/e)^n \le n! \le e^{1/(12n)} \sqrt{2\pi n}\,(n/e)^n$$
to approximate the binomial coefficients results in a tighter bound, which we state without proof:

Proposition 8.6. $R(k,k) \ge (e\sqrt{2})^{-1} k \, 2^{k/2}$ for all $k \ge 3$.
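The criterion $\binom{n}{k} 2^{1-\binom{k}{2}} < 1$ is easy to evaluate numerically. The following quick sketch (an illustration of the criterion above, not Erdős's computation; the sample values of $k$ are made up) finds the largest $n$ certified by the bound and prints $2^{k/2}$ for comparison.

```python
from math import comb

def erdos_lower_bound(k):
    """Largest n for which C(n,k) * 2^(1 - C(k,2)) < 1, so that R(k,k) > n."""
    n = k
    while comb(n + 1, k) * 2**(1 - comb(k, 2)) < 1:
        n += 1
    return n

for k in (5, 10, 15, 20):
    # The certified bound comfortably exceeds the cruder 2^(k/2) of Theorem 8.5.
    print(k, erdos_lower_bound(k), round(2**(k / 2)))
```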
8.5. Random variables, expectation and Markov's inequality.

Definition 8.7. A [discrete] random variable is a function $X : S \to \mathbb{R}$, where $(S, \Pr)$ is a finite probability space.

Any graph invariant ($n$, $e$, $\alpha$, $\chi$, ...) can be regarded as a random variable on $G(n,p)$. So can graph properties; remember that a property is the same thing as a set of graphs. The indicator variable of a graph property $P$ (a set of graphs on $n$ vertices) is
$$X_P(G) = \begin{cases} 1 & \text{if } G \in P, \\ 0 & \text{if } G \notin P. \end{cases}$$
The expectation or expected value or mean of $X$ is
$$\mu = E(X) = \sum_{G \in S} \Pr(G) \, X(G).$$
The expectation can be regarded as the average value of $X$ over the probability space.

As an example, let $X_k$ be the number of $k$-cycles in $G(n,p)$. Let $X_C$ be the indicator variable for a particular $k$-cycle $C$, so that $X_k = \sum_C X_C$. Then
$$(30) \qquad E(X_k) = \frac{k!}{2k} \binom{n}{k} p^k = \frac{n(n-1)\cdots(n-k+1)}{2k} \, p^k,$$
since there are $\binom{n}{k}$ choices for the vertices in the cycle and $k!/(2k)$ ways to order them cyclically, and each cycle occurs with probability $p^k$. Note that even though the variables $X_C$ are not pairwise independent, the expectation of their sum is nevertheless the sum of their expectations — the principle of linearity of expectation. If $Y_k$ were instead the number of induced $k$-cycles, then similarly
$$E(Y_k) = \frac{n(n-1)\cdots(n-k+1)}{2k} \, p^k (1-p)^{\binom{k}{2} - k}.$$

It is typically easier to compute expectations than probabilities, so a common technique is to try to replace probabilities by expressions involving expectations, using inequalities such as the following.

Proposition 8.8 (Markov's Inequality). If $X : S \to \mathbb{N}_{\ge 0}$ is a random variable, then $\Pr(X \ge n) \le E(X)/n$.

Proof.
$$E(X) = \sum_{k \ge 0} k \cdot \Pr(X = k) \ge \sum_{k \ge n} k \cdot \Pr(X = k) \ge \sum_{k \ge n} n \cdot \Pr(X = k) = n \cdot \Pr(X \ge n). \qquad \square$$

The proposition can easily be extended to real-valued discrete random variables (and to random variables on infinite probability spaces, though we won't need that here). A useful consequence is the following: if $\{X_1, X_2, \dots\}$ is a sequence of random variables taking values in $\mathbb{N}$, then
$$(31) \qquad \lim_{n\to\infty} E(X_n) = 0 \implies \lim_{n\to\infty} \Pr(X_n = 0) = 1.$$

8.6. Graphs with high girth and chromatic number. Here is a famous application of random graph theory:

Theorem 8.9 (Erdős 1959). For any $k \in \mathbb{N}$, there exists a graph with girth $g > k$ and chromatic number $\chi > k$.

In general, graphs with high girth have few edges, while graphs with high chromatic number have lots of edges. Erdős's proof begins with the hope that the probability $p$ in the Erdős–Rényi model can be chosen carefully: large enough that most graphs have high chromatic number, yet small enough that most graphs have high girth.

Say that a cycle is short if its length is $\le k$ and long otherwise, so that a graph has girth $> k$ if and only if it has no short cycles. Also, call a coclique big if it has at least $n(G)/k$ vertices. Note that if $G$ has chromatic number $\le k$, then at least one color class in an optimal coloring is big. Therefore, if $G$ has no big cocliques then its chromatic number is $> k$. Thus, we are looking for a graph with no short cycles and no big cocliques.

In order to make $p$ small enough to rule out the possibility of short cycles, it turns out that we have to make $p < 1/n$ (more precisely, we have to require that $pn \to 0$ as $n \to \infty$). Unfortunately, this constraint on $p$ rules out the probability of any cycles at all (exercise) — which means that $G$ will almost surely be acyclic, hence bipartite. So instead we set $p = n^{\varepsilon - 1}$ for some small positive $\varepsilon$. This will make big cocliques unlikely, and while it will produce short cycles in $G$, if $\varepsilon$ is small enough then it will not produce very many of them. We can then delete a vertex in each short cycle to produce a graph with no short cycles and no big cocliques — i.e., large girth and large chromatic number.
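Before carrying out this plan, it is worth sanity-checking formula (30) by simulation. The sketch below (my illustration; the parameter values are made up) counts triangles ($k = 3$) in samples of $G(n,p)$ and compares the empirical mean with $n(n-1)(n-2)p^3/6$.

```python
import random
from itertools import combinations
from math import perm

def count_triangles(n, p):
    """Sample G(n, p) and count its 3-cycles directly."""
    edge = {(i, j): random.random() < p for i, j in combinations(range(n), 2)}
    return sum(edge[(a, b)] and edge[(b, c)] and edge[(a, c)]
               for a, b, c in combinations(range(n), 3))

n, p, trials = 30, 0.1, 2000
empirical = sum(count_triangles(n, p) for _ in range(trials)) / trials
theoretical = perm(n, 3) * p**3 / 6   # equation (30) with k = 3: n(n-1)(n-2) p^3 / (2k)
print(empirical, theoretical)          # both should be about 4.06
```

Linearity of expectation is doing the work here: the triangle indicators are far from independent, yet the expectation of their sum is exactly the sum of their expectations.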
First observe that for any integer $r$, we have
$$(32) \qquad \Pr(\alpha \ge r) \le \sum_{U \subseteq [n] :\ |U| = r} \Pr(U \text{ is a coclique}) = \binom{n}{r} (1-p)^{\binom{r}{2}}.$$

Proposition 8.10. Let $k \ge 1$ be an integer, and let $p = p(n)$ depend on $n$ so that $p \ge (6k \ln n)/n$ for $n \gg 0$. Then
$$\lim_{n\to\infty} \Pr\left(\alpha \ge \frac{n}{2k}\right) = 0.$$

In other words, if $p$ remains above a certain threshold, then almost always $G(n,p)$ will not contain any big cocliques. (Why that particular bound for $p$? Erdős certainly didn't pull this out of thin air; he did the calculation first and then reverse-engineered the bound to work.)

Proof. Let $r = \lceil n/2k \rceil$. By (32) we have
$$\Pr[\alpha \ge r] \le \binom{n}{r} (1-p)^{\binom{r}{2}} \le n^r (1-p)^{\binom{r}{2}} \qquad \text{(crude but valid)}$$
$$= \left(n (1-p)^{(r-1)/2}\right)^r \le \left(n e^{-pr/2} e^{p/2}\right)^r \qquad \text{(since } 1 - p \le e^{-p} \text{ for all } p\text{)}$$
$$\le \left(n e^{-3 \ln n / 2} e^{p/2}\right)^r \qquad \text{(since } -pr \le -(6k \ln n / n)(n/2k) = -3 \ln n\text{)}$$
$$= \left(n \cdot n^{-3/2} e^{p/2}\right)^r = \left(n^{-1/2} e^{p/2}\right)^r,$$
which vanishes as $n \to \infty$. $\square$

Proof of Theorem 8.9. The graph $K_3$ has $g = \chi = 3$, so suppose $k \ge 3$. Fix $\varepsilon$ with $0 < \varepsilon < 1/k$ and let $p = n^{\varepsilon - 1}$. Let $X$ denote the number of short cycles in $G(n,p)$; recall that "short" means "of length at most $k$". Then $X = X_3 + \cdots + X_k$ in the notation of the previous section, so by (30) we have
$$E(X) = \sum_{r=3}^{k} \frac{n(n-1)\cdots(n-r+1)}{2r} \, p^r \le \frac{1}{2} \sum_{r=3}^{k} n^r p^r = \frac{1}{2} \sum_{r=3}^{k} n^{r\varepsilon} \le \frac{k-2}{2} \, n^{k\varepsilon},$$
and now Markov's inequality says that
$$\Pr(X \ge n/2) \le \frac{E(X)}{n/2} \le (k-2) \, n^{k\varepsilon - 1}.$$
But $\varepsilon < 1/k$ implies $k\varepsilon - 1 < 0$, so we conclude that $\lim_{n\to\infty} \Pr(X \ge n/2) = 0$. So for $n$ sufficiently large,
$$(33) \qquad \Pr[X \ge n/2] < 1/2.$$
Moreover, since $p = n^{\varepsilon - 1}$, it is indeed the case that $p \ge (6k \ln n)/n$ for $n$ sufficiently large. By Proposition 8.10, we can choose $n$ large enough so that
$$(34) \qquad \Pr[\alpha \ge n/2k] < 1/2.$$
Therefore, by (33) and (34) together,
$$\Pr[X \ge n/2 \text{ or } \alpha \ge n/2k] < 1/2 + 1/2 = 1,$$
which implies that there must exist a graph $G$ with $X(G) < n/2$ and $\alpha(G) < n/2k$. Delete a vertex from each of the (fewer than $n/2$) short cycles of $G$ to get a graph $H$. Then $H$ has girth $> k$, at least $n/2$ vertices, and $\alpha(H) \le \alpha(G)$, so
$$\chi(H) \ge \frac{|V(H)|}{\alpha(H)} \ge \frac{n/2}{\alpha(G)} > \frac{n/2}{n/2k} = k. \qquad \square$$

This is the revision frontier — everything hereafter comes with no guarantee of legibility, let alone correctness.

8.7. Threshold Functions. Suppose we have some graph property $Q$ that is monotone, i.e., is preserved by adding more edges. The simplest example is the property of having an edge! So are connectedness, $k$-connectivity (for any constant $k$), having minimum degree $d$ (for any constant $d$), and non-planarity. Examples of non-monotone graph properties include regularity and chordality.

If $p$ is a function of $n$, then how does
$$Q(G, p) := \lim_{n\to\infty} \Pr[G(n,p) \text{ has property } Q]$$
depend on $p(n)$? A theme of random graph theory is the appearance of thresholds in problems like this. For many monotone properties $Q$, there is some function $t = t(n)$ such that
• if $p \to 0$ faster than $t \to 0$, then $Q(G, p) = 0$;
• if $p \to 0$ slower than $t \to 0$, then $Q(G, p) = 1$.
More precisely,
$$Q(G, p) = \begin{cases} 0 & \text{if } \lim_{n\to\infty} p/t = 0, \\ 1 & \text{if } \lim_{n\to\infty} p/t = \infty. \end{cases}$$

Theorem 8.11. Let $Q$ be the property of having no isolated vertices (i.e., having $\delta \ge 1$). Then $t(n) = \ln n / n$ is a threshold function for $Q$.

In other words, if $p$ is any function that decays even a little bit faster than $t$ (say $p = 1/n$), then for large enough $n$, the probability that $G$ will have an isolated vertex tends to 1. If $p$ decays more slowly than $t$ (say $p = \ln n / n^{0.99}$), then for large enough $n$, the probability that $G$ will have an isolated vertex tends to 0.

First part of the proof. Let $X_i$ be the indicator variable of the event that vertex $i$ is isolated, and let $X = X_1 + \cdots + X_n$ be the number of isolated vertices. So
$$(35) \qquad E(X) = \sum_{i=1}^{n} E(X_i) = \sum_{i=1}^{n} (1-p)^{n-1} = n(1-p)^{n-1}.$$
Now we start messing around with this sum. First,
$$(36) \qquad (1-p)^{n-1} \sim (1-p)^n = e^{n \ln(1-p)} = e^{n(-p - p^2/2 - p^3/3 - \cdots)} < e^{-np}$$
(where $\sim$ means "has the same limit as $n \to \infty$"; note that $\lim_{n\to\infty} (1-p) = 1$). So
$$(37) \qquad E(X) = n(1-p)^{n-1} \sim n(1-p)^n < n e^{-np}.$$
Set $p = c \ln n / n$; then we get
$$(38) \qquad E(X) < n e^{-c \ln n} = n^{1-c}.$$
If $p(n)$ is above the threshold $t(n)$, then $c \to \infty$ and $n^{1-c} \to 0$. We've now proved that if $p(n)/t(n) \to \infty$, then
$$\lim_{n\to\infty} E(\text{number of isolated vertices}) = 0.$$
It follows (from Markov's inequality, via (31)) that $\lim_{n\to\infty} \Pr[X = 0] = 1$, i.e., $G$ almost surely has no isolated vertices, i.e., $\delta(G) \ge 1$ almost surely.

The second part is to show that if $p(n)/t(n) \to 0$, then $G$ almost surely does have an isolated vertex. Hold that thought; we'll finish the proof later. $\square$

Lots of properties exhibit this threshold behavior, so we can talk about the "evolution" of a random graph as $p(n)$ increases.
• If $n^2 p \to 0$, then almost surely $G$ has no edges. (This is very easy to prove.)
• If $np \to 0$ (or, if you prefer, $p = o(1/n)$), then $G_{n,p}$ is almost surely acyclic.
• If $np \to c$ for some $c \in (0,1)$, then $G_{n,p}$ is almost surely a bunch of components, each of which has at most one cycle. ($np \to 1$ is called the phase transition.)
• If $np \to c$ for some $c > 1$, then $G_{n,p}$ almost surely consists of a "giant component" containing lots of the vertices and a whole mess of small components, each containing at most one cycle. (To be more specific, the giant component has more than $(1 - 1/c)n$ vertices, while the second-largest component has only $O(\log n)$.) This explains the term "phase transition": imagine the graph as a potful of water molecules being cooled. The cooler the water gets, the more likely any two water molecules are to bond with each other (i.e., $p$ is increasing). In the narrow range around 32°F, the graph becomes slushy and then solidifies suddenly into a big block of ice with some bits of slush lying around.
• If $np \to \infty$ but $np - \log n \to -\infty$, then $G_{n,p}$ almost surely consists of a "giant component" containing almost all the vertices and a few acyclic components. (The giant ice block is starting to swallow the bigger bits of slush, leaving only a few small drops of water here and there.)
• If $np - \log n \to \infty$, then $G_{n,p}$ is almost surely connected.

8.8. Using the variance for lower thresholds. In order to show that $t(n)$ is an upper threshold for some graph property — i.e., that the property almost surely doesn't occur when $p/t \to 0$ — we need to bound a probability from above, and the standard tool for this is Markov's inequality. On the other side of the threshold, though, we need a way of bounding probabilities from below.

Definition 8.12. Let $X$ be a random variable with mean $\mu$. The variance of $X$ is
$$\sigma^2 = \operatorname{Var}(X) = E\left((X - \mu)^2\right).$$
This is a measure of how much $X$ deviates from its expected value. A useful formula for the variance is
$$(39) \qquad \operatorname{Var}(X) = E\left(X^2 - 2X\mu + \mu^2\right) = E(X^2) - 2\mu E(X) + \mu^2 = E(X^2) - \mu^2 = E(X^2) - E(X)^2.$$

Lemma 8.13 (Chebyshev's inequality). Let $X$ be a random variable with mean $\mu$. Then
$$\Pr[|X - \mu| \ge \lambda] \le \operatorname{Var}(X)/\lambda^2.$$

Proof. By Markov's inequality and the definition of variance, we have
$$\Pr[|X - \mu| \ge \lambda] = \Pr[(X - \mu)^2 \ge \lambda^2] \le E\left((X - \mu)^2\right)/\lambda^2 = \operatorname{Var}(X)/\lambda^2. \qquad \square$$

Proposition 8.14. Let $X$ be an $\mathbb{N}$-valued random variable on $G(n,p)$ with mean $\mu$. If $\mu > 0$ for $n$ sufficiently large and $\operatorname{Var}(X)/\mu^2 \to 0$, then $X(G) \ge 1$ for almost all $G$.

Proof. If $X(G) = 0$ then $|X(G) - \mu| = \mu$. So
$$\Pr[X(G) = 0] \le \Pr[|X(G) - \mu| = \mu] \le \Pr[|X(G) - \mu| \ge \mu] \le \sigma^2/\mu^2 \xrightarrow{\;n\to\infty\;} 0. \qquad \square$$

Corollary 8.15 (Second Moment¹³ Method). If $E(X^2)/E(X)^2 \to 1$, then $\Pr(X = 0) \to 0$.

Proof.
The hypothesis and (39) imply that
$$\frac{\operatorname{Var}(X)}{\mu^2} = \frac{E(X^2) - \mu^2}{\mu^2} = \frac{E(X^2)}{\mu^2} - 1 \to 1 - 1 = 0. \qquad \square$$

As an application, we complete the proof of Theorem 8.11, which states that $t = \ln n / n$ is a threshold function for the property of having no isolated vertices.

Second part of proof. We wish to show that if $p/t \to 0$, then $G$ almost surely does have an isolated vertex. As before, let $X_i$ be the indicator variable for the event that $i$ is isolated, and let $X = \sum X_i$. We have already calculated $E(X) = n(1-p)^{n-1}$, so $E(X)^2 = n^2 (1-p)^{2n-2}$. Meanwhile,
$$E(X^2) = E\left(\sum_{i=1}^{n} X_i^2 + 2 \sum_{i < j} X_i X_j\right) = \sum_{i=1}^{n} E(X_i) + 2 \sum_{i < j} E(X_i X_j) = n(1-p)^{n-1} + 2\binom{n}{2}(1-p)^{2n-3}$$
(using $X_i^2 = X_i$ for indicator variables, and the fact that $i$ and $j$ are both isolated exactly when the $2n-3$ possible edges meeting $\{i, j\}$ are all absent). $\square$

¹³The $k$th moment of a random variable $X$ is $E(X^k)$.
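The threshold of Theorem 8.11 is easy to observe experimentally. The sketch below (my illustration; the values of $n$, the trial count, and the choices of $c$ are made up) samples $G(n,p)$ at $p = c \ln n / n$ for $c$ below, at, and above 1, and estimates the probability of seeing an isolated vertex.

```python
import random
from math import log

def has_isolated_vertex(n, p):
    """Sample G(n, p) and report whether some vertex has degree 0."""
    degree = [0] * n
    for i in range(n):
        for j in range(i + 1, n):
            if random.random() < p:
                degree[i] += 1
                degree[j] += 1
    return min(degree) == 0

n, trials = 400, 200
for c in (0.5, 1.0, 2.0):            # p = c * ln(n)/n: below, at, and above the threshold
    p = c * log(n) / n
    freq = sum(has_isolated_vertex(n, p) for _ in range(trials)) / trials
    print(f"c = {c}: Pr[isolated vertex] = {freq:.2f}")
```

Consistent with (38), the estimate should be near 1 for $c = 0.5$, near 0 for $c = 2$, and in between (around $1 - e^{-1}$) at the threshold itself.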
189360
https://www.khanacademy.org/science/in-in-class-10-biology/in-in-heredity-and-evolution/x34856011f50c37d5:evolution/v/genetic-drift
Genetic drift (video) | Evolution | Khan Academy

NCERT Biology Class 10 > Unit 4 > Lesson 5: Evolution

About this video: Let's explore the concept of genetic drift. Created by Mahesh Shenoy.

Questions

Ritu Malik (5 years ago): At 2:44, there were only blue beetles and red beetles. Can't mutation take place and can't a green beetle be born? Or can there only be blue and red beetles?

kaede (2 years ago): (Sorry for a late reply) Of course they can! It's not even necessary for it to be green; it could be purple, black, etc. Any color is possible. When the red beetles were in higher frequency, green and blue beetles existed. The same way, when blue/red beetles are in higher frequency, any color beetle can exist due to variation.

Video transcript

[Instructor] Natural selection is a process in which if there is a trait which has an advantage, meaning higher chances of survival, then automatically, it gets more passed on because more chances of reproducing, and over generation, its number can start increasing, and this is one way in which evolution can happen. And we'll get back to this in a second. But in this video, I wanna talk about a different way in which evolution can take place and this is called genetic drift. So let's find out what genetic drift is.
So, I like to think of a genetic drift as evolution by luck. Evolution by luck. And you will see why I say that. So let's get back to our story of beetles. So, in a previous video, we were focusing on this group of beetles, which are mostly red in color, but then due to mutation, some beetles were born differently. Some had green color and some had blue color. And these crows were an important part of the story because they eat on these beetles, and this is where the green beetles had an advantage. They are hard to see even in this picture. They are hard to notice, right? Which means less chance of getting eaten, more chances of their survival, and so more chances of them reproducing and passing on those genes. And so, as time passed on, their numbers started increasing, and that's what we call natural selection. And, of course, this has been explained in more detail in our previous videos on evolution and natural selection, so if you need more clarity, definitely you can go back and watch that. But in this video, let's consider a different scenario. Let's say before natural selection has time to kick in, some calamity strikes. Maybe there's a fire or an earthquake or lightning strike or cyclone, or something like that happens. Let's consider something simple. Let's say some animal comes in, stomps it, and goes. That can totally happen. So let's say an elephant stomps on it, thud! And this kills almost all the beetles. Only a few ones over here survives. What happens next? Well, a couple of things can happen. Now maybe, the crows will eat all of them and they just die out so our population is vanished. That's one scenario. Another scenario could be if they survive, they can start reproducing and repopulating. But now what's interesting to see is there are more blue-colored beetles. That means as time passes by, there's a good chance that more blue-colored beetles will be found. And if we compare this with the previous situation, what do you see? Well, earlier, the red beetles were more in number which means the genes responsible for red color, they were more frequently seen and more frequently being passed along. So red color genes had a higher frequency. But afterwards, see what happened. Now, blue color are in majority. That means their genes are more frequently seen, the genes responsible for blue color. That's in high frequency now which means our beetles have evolved. That's the definition of evolution. When the gene frequency changes over generation in a population, that's what we call as evolution so the beetles have definitely evolved. But think about what caused this evolution. Did they evolve because the blue beetles had some kind of an advantage? Absolutely no. They have no advantage compared to red beetles. At least in this scenario, they can be easily spotted. It's the green ones that had an advantage, right? But the only reason this happened, this whole evolution happened to blue beetles is because in that stampede when that elephant stomped the group of beetles, they just happen to be in the right place at the right time purely by luck and so they survived. Most of the red beetles perished. The green beetles also perished and it is for that reason they evolved this way. And that's why I like to call this evolution by luck. So it's called genetic drift because in this disaster, most of the red genes just drifted away, meaning they just vanished because they died. 
The green genes, the genes responsible for green color, which would have been naturally selected, they also vanished. They just died and drifted away. So most of these genes drifted away because of some calamity, some disaster. That's why it's called genetic drift. So evolution can also happen. So this is evolution. Let me just write that. This is evolution but it's not caused due to natural selection, it's not because they have an advantage, but it happened purely by chance. And so the basic moral of the story here is that in certain populations, you might see certain traits which are being passed along even though they have absolutely no advantage in that environment. And that happens mainly because of genetic drift, meaning some random event caused all the other genes to just vanish away. Now, before you wind up, one question for you. Do you think genetic drift happens in a large population or in small population? What do you think? Okay, let's see. So let's imagine this was a large population of beetles, say thousands of beetles. And the way I'm gonna show that is I'm just gonna take that same elephant foot and make it smaller. This does not mean that the elephant has become smaller. Think of it as, since we're dealing with large number of beetles, they are occupying a much larger area, so I'm zooming out to show you this. And from that perspective, the elephant foot looks smaller. Think of it that way. Now, in this situation, do you see that there is a very good chance that almost all the genes might survive? Even if the elephant stomps multiple times, think about it. Because they are more spread out over here, when that stampede happens, definitely a lot of red will survive but maybe there were some of these greens were in this corner, some of the greens were in this corner, some of the blues may have been in that corner. So because now there are more in number compared to before, there's a very good chance that all those genes might survive. So yeah, the beetles will die definitely but all these genes might still survive and if they survive this particular catastrophe, then it's the green one that will get naturally selected. Ooh, so what does this mean? This means in large population, we don't expect a genetic drift to happen. Genetic drift only happens in small populations. So let me just write that down. That's super-important. You would expect genetic drift to happen in small populations. Small population. That's pretty much it. So what did we learn in this video? We learned something called genetic drift. It's a process in which random events can make certain genes just drift away and then automatically, the genes that survive that random event will get more passed on and ends up becoming majority in that population. Because of this, in certain population, even though certain traits have absolutely no advantage, they might still be found majority in number. And remember, genetic drift happens in small population. Smaller the population, more chances of having genetic drift.

Creative Commons Attribution/Non-Commercial/Share-Alike. Video on YouTube.
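The video's central claim — that drift matters most in small populations — can be illustrated with a toy simulation. The following Python sketch is not part of the video; the population sizes, generation count, and starting frequency are made-up parameters. It tracks the frequency of one beetle color under pure random sampling, with no survival advantage for either color.

```python
import random

def drift(pop_size, generations, start_freq=0.5):
    """Track the frequency of one color allele under pure random sampling
    (no survival advantage), as in the beetle story."""
    freq = start_freq
    for _ in range(generations):
        # Each beetle in the next generation inherits a color drawn at
        # random according to the current frequency.
        blue = sum(random.random() < freq for _ in range(pop_size))
        freq = blue / pop_size
    return freq

random.seed(1)
for size in (10, 10000):
    runs = [drift(size, 50) for _ in range(5)]
    print(size, [round(f, 2) for f in runs])
# Small populations wander far from 0.5 (often fixing at 0 or 1, i.e. one
# color takes over by luck); large populations stay close to 0.5.
```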
189361
https://www.ck12.org/flexi/cbse-math/permutation-and-combination/what-does-the-sum-of-digits-refer-to-in-number-theory/
What does the sum of digits refer to in number theory? - Definition | CK-12 Foundation

Question: What does the sum of digits refer to in number theory?

Flexi Says: In number theory, the sum of digits of a number refers to the arithmetic sum of its digits. For example, if you have a number 123, the sum of its digits would be 1 + 2 + 3 = 6. This concept is often used in various mathematical problems and proofs. It is also used in the formation of divisibility rules. For example, a number is divisible by 3 if and only if the sum of its digits is divisible by 3.
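As a quick illustration (my addition, not part of the CK-12 answer), both the digit sum and the divisibility-by-3 rule can be checked in a few lines of Python:

```python
def digit_sum(n: int) -> int:
    """Sum of the decimal digits of n, e.g. digit_sum(123) == 6."""
    return sum(int(d) for d in str(abs(n)))

# Divisibility rule: n is divisible by 3 iff its digit sum is divisible by 3.
for n in (123, 124, 999):
    print(n, digit_sum(n), n % 3 == 0, digit_sum(n) % 3 == 0)
```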
189362
https://mathbooks.unl.edu/Calculus/sec-4-3-definite-integral.html
Section 4.3 The Definite Integral

Motivating Questions

How does increasing the number of subintervals affect the accuracy of the approximation generated by a Riemann sum? What is the definition of the definite integral of a function f over the interval [a,b]? What does the definite integral measure exactly, and what are some of the key properties of the definite integral?

Supplemental Videos. The main topics of this section are also presented in the following videos:

Introduction to the Definite Integral: unl.yuja.com/V/Video?v=7114327&node=34303310&a=207063992&autoplay=1
Computing Definite Integrals Using Area: unl.yuja.com/V/Video?v=7114329&node=34303322&a=23006780&autoplay=1
Properties of the Definite Integral: unl.yuja.com/V/Video?v=7114328&node=34303288&a=13089772&autoplay=1
Definite Integrals and Symmetry: unl.yuja.com/V/Video?v=7114330&node=34303304&a=111120763&autoplay=1
Definite Integrals and Averages: unl.yuja.com/V/Video?v=7114326&node=34303260&a=150504388&autoplay=1

In Figure 4.40, we see evidence that increasing the number of rectangles in a Riemann sum improves the accuracy of the approximation of the net signed area bounded by the given function.

We therefore explore the natural idea of allowing the number of rectangles to increase without bound. In an effort to compute the exact net signed area, we also consider the differences among left, right, and middle Riemann sums and the different results they generate as the value of n increases. We begin with functions that are exclusively positive on the interval under consideration.

Subsection 4.3.1 The definition of the definite integral

In Figure 4.40, we saw that as the number of rectangles got larger and larger, the values of the Riemann sums converge to some value. It turns out that the same thing happens for any continuous function f on an interval [a,b], and the sums will converge to the same value as n grows larger. In fact, for any continuous function f and a Riemann sum using any point x_i* in the i-th subinterval [x_{i-1}, x_i], the sums will converge to that same value. Thus, as we let n → ∞, it doesn't really matter where we choose to evaluate the function within a given subinterval.

The fact that these limits always exist (and share the same value) when f is continuous (it turns out that a function need not be continuous in order to have a definite integral; for our purposes, we assume that the functions we consider are continuous on the interval(s) of interest, and it is straightforward to see that any function that is piecewise continuous on an interval of interest will also have a well-defined definite integral) allows us to make the following definition.

Definition. The definite integral of a continuous function f on the interval [a,b], denoted ∫_a^b f(x) dx, is the real number given by

∫_a^b f(x) dx = lim_{n→∞} Σ_{i=1}^{n} f(x_i*) Δx,

where Δx = (b−a)/n, x_i = a + iΔx (for i = 0, …, n), and x_i* satisfies x_{i−1} ≤ x_i* ≤ x_i (for i = 1, …, n).

We call the symbol ∫ the integral sign, the values a and b the limits of integration, and the function f the integrand. The process of determining the real number ∫_a^b f(x) dx is called evaluating the definite integral. While there are several different interpretations of the definite integral, for now the most important is that ∫_a^b f(x) dx measures the net signed area bounded by y = f(x) and the x-axis on the interval [a,b].

For example, if f is the function pictured in Figure 4.41, and A_1, A_2, and A_3 are the exact areas bounded by f and the x-axis on the three pictured subintervals, then the definite integral of f over the whole interval equals the sum of these areas, with areas above the x-axis counted positively and areas below the x-axis counted negatively (in the pictured case, A_1 − A_2 + A_3).
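The limit definition can be explored numerically. The following Python sketch is an illustration, not part of the text; the choice f(x) = x² on [0,1] (whose exact integral is 1/3) is a made-up example. It computes left, middle, or right Riemann sums and shows them approaching the exact value as n grows.

```python
def riemann_sum(f, a, b, n, rule="mid"):
    """Approximate the definite integral of f on [a, b] with n equal subintervals,
    evaluating f at the left endpoint, midpoint, or right endpoint of each."""
    dx = (b - a) / n
    offset = {"left": 0.0, "mid": 0.5, "right": 1.0}[rule]
    return sum(f(a + (i + offset) * dx) for i in range(n)) * dx

f = lambda x: x**2          # exact integral on [0, 1] is 1/3
for n in (10, 100, 1000, 10000):
    print(n, riemann_sum(f, 0, 1, n, "right"))
# The printed values approach 1/3 as n grows, and it matters less and less
# which point of each subinterval is used.
```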
We can also use definite integrals to express the change in position and the distance traveled by a moving object. If v(t) is a velocity function on an interval [a,b], then the change in position of the object, s(b) − s(a), is given by

s(b) − s(a) = ∫_a^b v(t) dt.

If the velocity function is nonnegative on [a,b], then ∫_a^b v(t) dt tells us the distance the object traveled. If the velocity is sometimes negative on [a,b], we can use definite integrals to find the areas bounded by the function on each interval where v does not change sign, and the sum of these areas will tell us the distance the object traveled.

To compute the value of a definite integral from the definition, we have to take the limit of a sum. While this is possible to do in select circumstances, it is also tedious and time-consuming, and does not offer much additional insight into the meaning or interpretation of the definite integral. Instead, in Section 4.4, we will learn the Fundamental Theorem of Calculus, which provides a shortcut for evaluating a large class of definite integrals. This will enable us to determine the exact net signed area bounded by a continuous function and the x-axis in many circumstances.

For now, our goal is to understand the meaning and properties of the definite integral, rather than to compute its value. To do this, we will rely on the net signed area interpretation of the definite integral. So we will use as examples curves that produce regions whose areas we can compute exactly through area formulas. We can thus compute the exact value of the corresponding integral.

For instance, if we wish to evaluate the definite integral of the linear function pictured in Figure 4.42, we observe that the region bounded by this function and the x-axis is the trapezoid shown there, and the formula A = ½(b₁ + b₂)h for the area of a trapezoid gives the exact value of the integral.

Subsection 4.3.2 Some properties of the definite integral

Regarding the definite integral of a function over an interval as the net signed area bounded by the function and the x-axis, we discover several standard properties of the definite integral. It is helpful to remember that the definite integral is defined in terms of Riemann sums, which consist of the areas of rectangles.

For any real number a and the definite integral ∫_a^a f(x) dx, it is evident that no area is enclosed, because the interval begins and ends with the same point. Hence:

If f is a continuous function and a is a real number, then ∫_a^a f(x) dx = 0.

Next, we consider the result of subdividing the interval of integration. In Figure 4.43, we see that splitting the interval at an intermediate point splits the bounded region into two pieces whose net signed areas add, which illustrates the following general rule.

If f is a continuous function and a, b, and c are real numbers, then ∫_a^b f(x) dx = ∫_a^c f(x) dx + ∫_c^b f(x) dx.

While this rule is easy to see if a < c < b, it in fact holds in general for any values of a, b, and c. Another property of the definite integral states that if we reverse the order of the limits of integration, we change the sign of the integral's value.

If f is a continuous function and a and b are real numbers, then ∫_b^a f(x) dx = −∫_a^b f(x) dx.

This result makes sense because if we integrate from a to b, then in the defining Riemann sum we set Δx = (b−a)/n, while if we integrate from b to a, we have Δx = (a−b)/n, and this is the only change in the sum used to define the integral.

There are two additional useful properties of the definite integral. When we worked with derivative rules in Chapter 2, we formulated the Constant Multiple Rule and the Sum Rule. Recall that the Constant Multiple Rule says that if f is a differentiable function and k is a constant, then

d/dx [k f(x)] = k f′(x),

and the Sum Rule says that if f and g are differentiable functions, then

d/dx [f(x) + g(x)] = f′(x) + g′(x).

These rules are useful because they allow us to deal individually with the simplest parts of certain functions by taking advantage of addition and multiplying by a constant. In other words, the process of taking the derivative respects addition and multiplying by constants in the simplest possible way.
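The properties just stated can be checked numerically with any Riemann-sum approximator. The sketch below is an illustration (the function f(x) = x³ − x and the endpoints are made-up examples), verifying that an integral over a degenerate interval is 0, that integrals add over subintervals, and that reversing the limits flips the sign.

```python
def midpoint_integral(f, a, b, n=100000):
    """Midpoint-rule approximation of the definite integral of f on [a, b]."""
    dx = (b - a) / n
    return sum(f(a + (i + 0.5) * dx) for i in range(n)) * dx

f = lambda x: x**3 - x

# A degenerate interval encloses no area: integral from 2 to 2 is 0.
print(midpoint_integral(f, 2, 2))                      # 0.0

# Additivity: integral over [0,2] equals the sum over [0,1] and [1,2].
print(midpoint_integral(f, 0, 2),
      midpoint_integral(f, 0, 1) + midpoint_integral(f, 1, 2))

# Reversing the limits flips the sign (dx becomes negative).
print(midpoint_integral(f, 2, 0), -midpoint_integral(f, 0, 2))
```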
It turns out that similar rules hold for the definite integral. First, let's consider the functions pictured in Figure 4.44.

Because multiplying the function by 2 doubles its height at every x-value, we see that the height of each rectangle in a left Riemann sum is doubled: f(x_i) for the original function versus 2f(x_i) in the doubled function. It follows that each rectangle's area, and hence the total bounded area, is doubled as well. As this is true regardless of the value of n or the type of sum we use, we see that in the limit, the area of the region bounded by y = 2f(x) will be twice the area of the region bounded by y = f(x). As there is nothing special about the value 2 compared to an arbitrary constant k, the following general principle holds.

Constant Multiple Rule. If f is a continuous function and k is any real number, then

∫_a^b k f(x) dx = k ∫_a^b f(x) dx.

We see a similar situation with the sum of two functions f and g.

If we take the sum of two functions f and g at every point in the interval, the height of the sum function is f(x) + g(x). Hence, for the pictured rectangles with areas A, B, and C, it follows that C = A + B. Because this will occur for every such rectangle, in the limit the area of the gray region will be the sum of the areas of the blue and red regions. In terms of definite integrals, we have the following general rule.

Sum Rule. If f and g are continuous functions, then

∫_a^b [f(x) + g(x)] dx = ∫_a^b f(x) dx + ∫_a^b g(x) dx.

The Constant Multiple and Sum Rules can be combined to say that for any continuous functions f and g and any constants c and k,

∫_a^b [c f(x) + k g(x)] dx = c ∫_a^b f(x) dx + k ∫_a^b g(x) dx.

Example 4.46. Suppose that the following information is known about the functions f, g, x², and x³: ; ; ; ; Use the provided information and the rules discussed in the preceding section to evaluate each of the following definite integrals.

Hint. Note that the value of one integral is given. Use the given values. First find the two needed integrals. Use the sum and constant multiple rules. First rewrite the integrand.

Solution.
1. Note that the value of the integral is given directly.
2. Since the two needed values are known, combine them.
3. First, using work from (a) and similar to that in (c), find the two component integrals; then apply the sum rule.
4. Apply the sum and constant multiple rules.
5. First, rewrite the integrand. Then, using the sum and constant multiple rules, the value follows.

Example 4.47. Use known geometric formulas and the net signed area interpretation of the definite integral to evaluate each of the definite integrals below. (In the last part, the integrand is the function pictured in Figure 4.48; assume that each portion of it is either part of a line or part of a circle.)

Hint. Sketch the region bounded by the function and the x-axis on the given interval in each case. For the third integral, observe that the integrand is the top half of a circle of radius 1. Use known formulas for the area of a triangle, square, or circle appropriately.

Solution.
1. Because the function and the x-axis bound a triangle with base of length 1 on the given interval, the integral equals the triangle's area.
2. For the second integral, we first sketch the region bounded by the function. The line creates two triangles. Since the second triangle corresponds to a region below the x-axis, we associate a negative sign with its area, and the integral is the difference of the two areas.
3. For the third integral, we simply observe that the integrand is the top half of a circle of radius 1, and thus the bounded region is a semicircle of radius 1, having area π/2. Therefore, the integral equals π/2.
4. Finally, for the function pictured in Figure 4.48, we consider the function on seven consecutive subintervals of length 1 and add the net signed areas bounded on each.
Thus, the value of the integral is the sum of those seven net signed areas, which is approximately 0.8562.

Subsection 4.3.3 How the definite integral is connected to a function's average value

One of the most valuable applications of the definite integral is that it provides a way to discuss the average value of a function, even for a function that takes on infinitely many values. Recall that if we wish to take the average of n numbers y_1, y_2, …, y_n, we compute

avg = (y_1 + y_2 + ⋯ + y_n)/n.

Since integrals arise from Riemann sums in which we add n values of a function, it should not be surprising that evaluating an integral is similar to averaging the output values of a function. Consider, for instance, the right Riemann sum R_n of a function f, which is given by

R_n = f(x_1)Δx + f(x_2)Δx + ⋯ + f(x_n)Δx = [f(x_1) + f(x_2) + ⋯ + f(x_n)] Δx.

Since Δx = (b−a)/n, we can thus write

(4.1) R_n = (b−a) · [f(x_1) + f(x_2) + ⋯ + f(x_n)]/n.

We see that the right Riemann sum with n subintervals is just the length of the interval times the average of the function values found at the right endpoints. And just as with our efforts to compute area, the larger the value of n we use, the more accurate our average will be. Indeed, we will define the average value of f on [a,b] to be

f_AVG[a,b] = lim_{n→∞} [f(x_1) + f(x_2) + ⋯ + f(x_n)]/n.

But we also know that for any continuous function f on [a,b], taking the limit of a Riemann sum leads precisely to the definite integral. That is, lim_{n→∞} R_n = ∫_a^b f(x) dx, and thus taking the limit as n → ∞ in Equation (4.1), we have that

(4.2) ∫_a^b f(x) dx = (b−a) · f_AVG[a,b].

Solving Equation (4.2) for f_AVG[a,b], we have the following general principle.

Average value of a function. If f is a continuous function on [a,b], then its average value on [a,b] is given by the formula

f_AVG[a,b] = (1/(b−a)) ∫_a^b f(x) dx.

Equation (4.2) tells us another way to interpret the definite integral: the definite integral of a function f from a to b is the length of the interval times the average value of the function on the interval. In addition, when the function f is nonnegative on [a,b], Equation (4.2) has a natural visual interpretation.

Consider Figure 4.49, where we see at left the shaded region whose area is ∫_a^b f(x) dx, at center the shaded rectangle whose dimensions are (b−a) by f_AVG[a,b], and at right these two figures superimposed. Note that in dark green we show the horizontal line y = f_AVG[a,b]. Thus, the area of the green rectangle is (b−a) · f_AVG[a,b], which is precisely the value of ∫_a^b f(x) dx. The area of the blue region in the left figure is the same as the area of the green rectangle in the center figure. We can also observe that the two areas by which the figures differ in the rightmost frame appear to be equal. Thus, knowing the average value of a function enables us to construct a rectangle whose area is the same as the value of the definite integral of the function on the interval. The linked java applet (David Austin, at gvsu.edu/s/az) provides an opportunity to explore how the average value of the function changes as the interval changes, through an image similar to that found in Figure 4.49.

Example 4.50. Suppose that v(t) = √(4 − (t−2)²) tells us the instantaneous velocity of a moving object on the interval 0 ≤ t ≤ 4, where t is measured in minutes and v is measured in meters per minute.

1. Sketch an accurate graph of y = v(t). What kind of curve is it?
2. Evaluate ∫_0^4 v(t) dt exactly.
3. In terms of the physical problem of the moving object with velocity v(t), what is the meaning of ∫_0^4 v(t) dt? Include units in your answer.
4. Determine the exact average value of v(t) on [0,4]. Include units in your answer.
5. Sketch a rectangle whose base is the line segment from t = 0 to t = 4 on the t-axis such that the rectangle's area is equal to the value of ∫_0^4 v(t) dt. What is the rectangle's exact height?
6. How can you use the average value you found in (d) to compute the total distance traveled by the moving object over [0,4]?

Hint. Note that y = v(t) is part of the curve given by (t−2)² + y² = 4. What familiar shape is generated by this curve?
Recall the meaning of the area bounded by a nonnegative velocity function on a given interval. From the meaning of the average value of a function, we know ∫_0^4 v(t) dt = 4 · v_AVG[0,4]. Consider a key recent figure in the text. Distance equals average rate times time.

Answer.
1. y = v(t) is the top half of the circle (t−2)² + y² = 4, which has radius 2 and is centered at (2,0).
2. ∫_0^4 v(t) dt = 2π.
3. The object moved 2π meters in 4 minutes.
4. v_AVG[0,4] = π/2 meters per minute.
5. The height of the rectangle is the average value of v, namely π/2.
6. Distance traveled = (π/2)(4) = 2π meters.

Solution.
1. The curve y = v(t) is the top half of the circle (t−2)² + y² = 4, which has radius 2 and is centered at (2,0).
2. Thus, the value of ∫_0^4 v(t) dt is the area of a semicircle of radius 2, which is 2π.
3. Because the velocity is always nonnegative in this problem, the meaning of ∫_0^4 v(t) dt is both the distance traveled and the change in position of the object on the interval [0,4]. Specifically, the object moved 2π meters in 4 minutes.
4. We know ∫_0^4 v(t) dt = 4 · v_AVG[0,4], so v_AVG[0,4] = 2π/4 = π/2, which is measured in meters per minute, since the units of "4" are minutes and of "2π" are meters.
5. Constructing a figure similar to those shown in this section on the topic of average value of a function, we find a rectangle having the same area as the semicircle. The height of the rectangle is the average value of v, specifically v_AVG[0,4] = π/2.
6. Knowing that average velocity is π/2, it follows from the fact that distance traveled equals average rate times time (provided velocity is always nonnegative) that the distance is (π/2)(4) = 2π. We see from (c) and (f) that we are simply considering the situation from two different perspectives: if we know the distance traveled, we can find average velocity, or if we know average velocity, we can find distance traveled.

Subsection 4.3.4 Summary

Any Riemann sum of a continuous function f on an interval [a,b] provides an estimate of the net signed area bounded by the function and the horizontal axis on the interval. Increasing the number of subintervals in the Riemann sum improves the accuracy of this estimate, and letting the number of subintervals increase without bound results in the values of the corresponding Riemann sums approaching the exact value of the enclosed net signed area.

When we take the limit of Riemann sums, we arrive at what we call the definite integral of f over the interval [a,b]. In particular, the symbol ∫_a^b f(x) dx denotes the definite integral of f over [a,b], and this quantity is defined by the equation

∫_a^b f(x) dx = lim_{n→∞} Σ_{i=1}^{n} f(x_i*) Δx,

where Δx = (b−a)/n, x_i = a + iΔx (for i = 0, …, n), and x_i* satisfies x_{i−1} ≤ x_i* ≤ x_i (for i = 1, …, n).

The definite integral ∫_a^b f(x) dx measures the exact net signed area bounded by f and the horizontal axis on [a,b]; in addition, the value of the definite integral is related to what we call the average value of the function on [a,b]: f_AVG[a,b] = (1/(b−a)) ∫_a^b f(x) dx. In the setting where we consider the integral of a velocity function v, ∫_a^b v(t) dt measures the exact change in position of the moving object on [a,b]; when v is nonnegative, ∫_a^b v(t) dt is the object's distance traveled on [a,b].

The definite integral is a sophisticated sum, and thus has some of the same natural properties that finite sums have. Perhaps most important of these is how the definite integral respects sums and constant multiples of functions, which can be summarized by the rule

∫_a^b [c f(x) + k g(x)] dx = c ∫_a^b f(x) dx + k ∫_a^b g(x) dx,

where f and g are continuous functions on [a,b] and c and k are arbitrary constants.

Exercises 4.3.5 Exercises

1. Evaluating definite integrals from graphical information.

Use the following figure, which shows a graph of f(x), to find each of the indicated integrals. Note that the first area (with vertical, red shading) is 55 and the second (with oblique, black shading) is 5.

A. B. C. D.

2. Estimating definite integrals from a graph.
Use the graph of g shown below to find the following integrals.

A. B. If the vertical red shaded area in the graph has area A, estimate the indicated integral. (Your estimate may be written in terms of A.)

3. Finding the average value of a linear function.

Find the average value of the given linear function over the indicated interval.

average value =

4. Finding the average value of a function given graphically.

The figure below to the left is a graph of f(x), and below to the right is g(x).

(a) What is the average value of f on the given interval? avg value =
(b) What is the average value of g on the given interval? avg value =
(c) What is the average value of the indicated combination of f and g on the given interval? avg value =
(d) Is the following statement true? Average(f) combined with Average(g) equals the average of the combination.

5. Estimating a definite integral and average value from a graph.

Use the figure below, which shows the graph of y = f(x), to answer the following questions.

A. Estimate the integral.
B. Which of the indicated average values of f(x) is larger: the one over the first interval, or the one over the second?

6. Using rules to combine known integral values.

Suppose the values of certain definite integrals are given; combine them using the rules of this section.

7. Using definite integrals on a velocity function. The velocity of an object moving along an axis is given by the piecewise linear function v that is pictured in Figure 4.51. Assume that the object is moving to the right when its velocity is positive, and moving to the left when its velocity is negative. Assume that the given velocity function is valid on the pictured interval.

(a) Write an expression involving definite integrals whose value is the total change in position of the object on the interval.
(b) Use the provided graph of v to determine the value of the total change in position on the interval.
(c) Write an expression involving definite integrals whose value is the total distance traveled by the object on the interval.
(d) What is the exact value of the total distance traveled on the interval?
(e) What is the object's exact average velocity on the interval?
(f) Find an algebraic formula for the object's position function on the interval that satisfies the given initial condition.

8. Riemann sum estimates and definite integrals. Suppose that the velocity of a moving object is given by a function v(t), measured in feet per second, and that this function is valid on the given interval.

(a) Write an expression involving definite integrals whose value is the total change in position of the object on the interval.
(b) Use appropriate technology (such as the applets by Marc Renault, Shippensburg University) to compute Riemann sums to estimate the object's total change in position. Work to ensure that your estimate is accurate to two decimal places, and explain how you know this to be the case.
(c) Write an expression involving definite integrals whose value is the total distance traveled by the object on the interval.
(d) Use appropriate technology to compute Riemann sums to estimate the object's total distance traveled. Work to ensure that your estimate is accurate to two decimal places, and explain how you know this to be the case.
(e) What is the object's average velocity on the interval, accurate to two decimal places?

9. Using the Sum and Constant Multiple Rules. Consider the graphs of two functions f and g that are provided in Figure 4.52. Each piece of f and g is either part of a straight line or part of a circle.

(a) Determine the exact value of the first indicated integral.
(b) Determine the exact value of the second indicated integral.
(c) Find the exact average value of the indicated function on the given interval.
(d) For what constant c does the indicated equation hold?

10. Finding the area of a bounded region. Let f(x) and g(x) be the given functions.
(a) On the given interval, sketch a labeled graph of f(x) and write a definite integral whose value is the exact area bounded by f(x) on that interval.
(b) On the same interval, sketch a labeled graph of g(x) and write a definite integral whose value is the exact area bounded by g(x) on that interval.
(c) Write an expression involving a difference of definite integrals whose value is the exact area that lies between f(x) and g(x) on the interval.
(d) Explain why your expression in (c) has the same value as a single integral of the difference of the two functions.
(e) Explain why, in general, if f(x) ≥ g(x) for all x in [a,b], the exact area between the two curves is given by ∫_a^b [f(x) − g(x)] dx.
189363
https://www.geogebra.org/m/pHajWs2u
Dynamic demonstration of dividing the area of a perfect polygon, to several parts of equal area – GeoGebra

Author: Idan Tal
Topic: Area
189364
https://apologeticspress.org/refuting-the-miller-urey-experiment/
Refuting the Miller-Urey Experiment
Joshua Kee, M.S.
Creation vs. Evolution | Alleged Human Evolution
From Issue: R&R – Issue 44 #10

On May 15, 1953, Science magazine published an article by Stanley L. Miller that transformed the scientific field of origins. This article was titled "A Production of Amino Acids under Possible Primitive Earth Conditions" and described the experiment (designed by graduate student Miller and his advisor, Harold Urey) as attempting to replicate the emergence of life from prebiotic1 soup. The results of this experiment prompted newspapers to make statements such as "life from non-life." The Miller Experiment's results were viewed as an alternative to the intelligent design movement and as bridging the barriers to understanding the origin of life. This experiment also caused an increased interest in stories such as Mary W. Shelley's Frankenstein, where dead bodies were resurrected using electricity. Jeffrey L. Bada and Antonio Lazcano said that this experiment "almost overnight transformed the study of the origin of life into a respectable field of inquiry."2 This experiment also introduced a new field of study: prebiotic chemistry. Current biology textbooks still use Miller's experiments as a basis for the origin of life on Earth, describing it as a "famous"3 and "elegant experiment."4

What Is Life?

Before we consider this experiment about the origin of life, let's consider the definition of "life." Morris, et al. give four essential characteristics for living things: an archive of information, a barrier that separates the living thing from the environment, capacity to regulate cell interiors, and the ability to gather materials and harness energy from the environment.5 Urry, et al. gave examples of some of the properties of life: order, evolutionary adaptation, regulation, energy processing, growth and development, response to the environment, and reproduction.6 These characteristics or properties of life must exist together for something to be considered a living organism. The information about how any living organism is constructed is contained inside the organism's cells on strands of DNA (deoxyribonucleic acid) and RNA (ribonucleic acid), which consist of specific arrangements of five nucleotide bases: adenine, guanine, cytosine, thymine (found only in DNA), and uracil (found only in RNA). This information is used by the cell to construct and organize proteins, which are made from molecules called amino acids and are arranged in specific sequences and three-dimensional patterns. Proteins are necessary for the structural arrangement of the cell and the many metabolic processes required for life. Science magazine attributed the explanation for the origin of this complexity to the discovery by Miller and Urey, specifically the origin of the amino acids that are the basis for the proteins in the cell.7

Miller's Experiment

It was originally supposed that organic compounds, those compounds that contain the element carbon and are found in living organisms (for example, DNA or proteins), were only able to be made, or synthesized, by living organisms themselves. In the same way, inorganic molecules—those molecules that do not contain carbon—were only able to originate from non-living sources. However, in an essay published in Science magazine, Bada and Lazcano8 state that the scientist who first reported synthesizing a simple organic compound from inorganic molecules was F.
Wöhler in 1828.9 Bada and Lazcano also stated that, in 1913, W. Löb reported that he had synthesized the first simple amino acids using wet formamide, a silent discharge of electricity, and ultraviolet light. In 1950, Melvin Calvin attempted to synthesize organic compounds in oxidizing atmospheric conditions.10 He was able to synthesize a high volume of formic acid;11 however, he demonstrated the necessity of running these experiments in a reducing atmosphere.12 In 1951, Harold Urey presented his concept of a prebiotic, reducing atmosphere from his studies of the origin of the Universe. In 1953, Miller,13 a graduate student at the University of Chicago, developed an apparatus to form basic organic compounds. He used CH4 (methane), NH3 (ammonia), H2O (water), and H2 (hydrogen) and circulated them through an electrical discharge for a week. After the experiment was run, he added HgCl2 (mercuric chloride) to prevent the growth of living organisms, distilled the results, and positively identified the amino acids glycine, α-alanine, and β-alanine, and less certainly identified some other amino acids. These amino acids provide part of the foundation for proteins, the building blocks of life. Later analysis of samples from Miller's work revealed over 40 different amino acids and amines.14 If the conclusions from Miller's experiment violated established laws of science, however, or if he based the experiment upon faulty assumptions, then his experiment would be invalid evidence for abiogenesis.15 While Miller made a profound discovery, the unsubstantiated conclusion that he and others drew from his work ignored established science and made several assumptions that cannot be supported.

Contradiction of Scientific Laws

The purpose, conclusion, and application of Miller's experiment contradicted firmly established laws of science: theories that have "been tested by and [are] consistent with generations of data."16 Even now, more than half a century after Miller's experiment, these are still considered law. One is the Law of Biogenesis: the fact that life cannot come from non-life—there must be pre-existing life. This thought was expressed by Rudolf Virchow in 1855: "Omnis cellula e cellula," or, "Every cell from a cell."17 The Law of Biogenesis is based on work by Francesco Redi, Lazzaro Spallanzani, and Louis Pasteur. The hypothesis that Miller was testing was in contradiction to this already established Law and therefore, as expected, the experiment failed to support his hypothesis. This is a law based upon exclusion: abiogenesis has never been observed. Scientists do not know exactly how life could have come about from non-life. They have never replicated it in a laboratory. They have never seen signs of abiogenesis inside or outside the lab. So, there is no evidence for life coming from anything other than life. Does Miller's experiment nullify the Law of Biogenesis? No, it only strengthens it. Even in the orderly and precise conditions found in a laboratory, scientists have not been able to create life from non-life, and yet it is assumed by naturalists that it happened in the disorganized prebiotic world. Another scientific law that is ignored by Miller's experiment is the Second Law of Thermodynamics. This law states that, "in any chemical or physical process, the entropy of the Universe tends to increase."18 To put it another way, the Universe is continuing to become more disorganized.
An analogy of this scientific law is a tornado going through an airplane graveyard: instead of assembling new planes, it will cause greater damage to the junked planes. The objective of Miller's experiment was to provide evidence that the Universe, at one point, went from disorder (prebiotic soup) to order (amino acids, DNA, then life), which would seem to break this Law. While it is true that, in an open system (like Earth), useful energy can be added from without, allowing entropy to be countered locally in some cases, that energy has to be of such a nature that it can, in fact, counter entropy in the particular system under consideration (rather than increasing entropy). No evidence has been presented to substantiate the conjecture that entropy was countered at the molecular and genetic level at the beginning of life (or each of the evolutionary jumps thereafter).[19] Instead, genetic entropy is the rule.[20] The contradiction of Miller's results with these two scientific laws was not addressed.

False Conclusions and Assumptions

Miller addressed the hypothesis of early formation of organic compounds that would serve as the basis of life. However, it must be understood that his experiment resulted in forming only some of the clay to make the house of life. Amino acids are the foundation for proteins, the building blocks of life. The amino acids must be combined in a precise way and be able to replicate themselves perfectly, following the genetic code of DNA. The DNA is transcribed into RNA, which is translated into a protein. Some of the proteins are required for the maintenance and replication of DNA. You cannot have functional DNA without proteins, nor vice versa.

Irreducible complexity is a concept that has been suggested by Michael Behe, a professor of biochemistry at Lehigh University. It is the idea that a living organism must have a minimum number of working processes. If the organism were ever missing one of these processes, or if one were faulty, the organism could not live.[21] If the amino acids did not combine in the right order (and, therefore, did not produce that minimum number of working processes), then they would not be able to continue replicating themselves. Miller addressed this concept in a response to Sidney W. Fox's letter to Science magazine in 1959 by saying, "it would be convenient for the investigator if the primitive pathways followed the present ones, but surely this is not necessary…. If we choose the pathway of the more primitive organism, then why should not even more primitive organisms have used pathways different from these?"[22] Miller is implying that there are reducibly complex organisms with simpler and simpler metabolic pathways until you just have a string of random amino acids. These reducibly complex organisms and simpler metabolic pathways are ideas conceived only in the human mind and do not have any scientific evidence for their existence. So, even though Miller's experiment resulted in some of the building blocks (amino acids) for the building of life (proteins), his experiment did not create life itself, nor show how it could have evolved from the random amino acids.

Additionally, the amino acids made in Miller's apparatus were a racemic mixture, equal proportions of right- and left-handed amino acids: specific orientations that are mirror images of each other.[23] Miller and Urey bring this to light in their defense against bacterial contamination.
However, life is composed almost entirely of left-handed amino acids.[24] The results of this experiment show that abiotic synthesis of organic molecules does not produce the necessary configuration for life, nor does it explain how life came to be composed mainly of only one orientation of amino acids.

A different problem with Miller's experiment is the assumption that was made based on the uniformitarian[25] concept of the Universe. Miller and Urey co-authored an article that brought out several uniformitarian assumptions that they made—assumptions that would directly affect the plausibility of the abiogenesis hypothesis. For instance, they said, "there is no reason to suppose that the same temperature [we experience on Earth today—JK] was not present in the past."[26] Looking at the geological record, however, we find that there have been cycles of cooling and warming. The varying temperatures would affect the composition of the prebiotic atmosphere as well as the stability of any organic molecules formed. The assumption of uniformitarian conditions cannot be validated.

Miller and Urey further attempted to explain the current buffer systems of the ocean to show that the pH level of the ocean in the past was suitable for life to originate. The pH of the ocean at the time is argued to have been 8, making it ideal for the stability of the ammonia that allows hydrogen to escape the atmosphere, which in turn allows for a reducing atmosphere. They present their calculations as sound, yet proceed to admit that they are invalid:

It is evident that the calculations do not have a quantitative validity because of many uncertainties with respect to temperature, the processes by which equilibrium could be approached, the atmospheric level at which such processes would be effective, and the partial pressure of hydrogen required to provide the necessary rate of escape. In view of these uncertainties, further calculations are unprofitable at the present time. However, we can conclude from this discussion that a reducing atmosphere containing low partial pressures of hydrogen and ammonia and a moderate pressure of methane and nitrogen constitutes a reasonable atmosphere for the primitive earth. That this was the case is not proved by our arguments….[27]

Miller and Urey conclude that nothing can be determined about the oceanic and atmospheric conditions because of a lack of evidence.

A final problem with Miller's experiment is the composition of the atmospheric conditions that he used. Miller used methane, ammonia, water, and hydrogen as the assumed atmospheric composition when life originated, based on the works of Urey and Oparin. Miller and Urey said that only by using a reducing atmosphere could amino acids be synthesized. They affirmed that, "if the conditions were oxidizing, no amino acids were synthesized."[28] Miller and Urey also concluded that oxygen was not necessary to the early atmosphere because it is not essential for life. Regarding the experimental synthesis of life in an oxidizing atmosphere, they said that the experiments could "be interpreted to mean that it would not have been possible to synthesize organic compounds nonbiologically as long as oxidizing conditions were present on the earth."[29] So, was the prebiotic atmosphere a reducing atmosphere or an oxidizing atmosphere? In their book, The Origins of Life on Earth, Stanley L. Miller and Leslie E.
Orgel described their reasoning behind having a prebiotic, reducing atmosphere: "We believe that there must have been a period when the earth's atmosphere was reducing, because the synthesis of compounds of biological interest takes place only under reducing conditions."[30] They go on to say that there is some geological and geophysical evidence suggesting that the early atmosphere was reducing, and they conclude, "Fortunately, everyone agrees that although the primitive atmosphere may not have been strongly reducing, it certainly did not contain more than a trace of molecular oxygen."[31] Their circular reasoning is that life originated in a reducing atmosphere and that we know there was a reducing atmosphere because life had to originate in it. However, Philip H. Abelson of the Geophysical Laboratory asked, and answered, "What is the evidence for a primitive methane-ammonia atmosphere on earth? The answer is that there is no evidence for it, but much against it."[32] He references Rubey, a member of the U.S. Geological Survey, in saying that volcanic gases, which are thought to have been abundant when life originated, would be similar in composition to the atmosphere near the Earth: water, carbon dioxide, and nitrogen. Abelson continues, stating that the early atmosphere was reducing, but not to the extent that Miller believed. It is thought that there was carbon monoxide (an oxidizing agent) from the outgassing that was transformed into formate.[33] However, the partial pressure of the carbon monoxide would still be high enough to interact with any amino acids that developed. So, there were oxidizing agents in the prebiotic air. However, we cannot know for certain what the partial pressure was in the early atmosphere.

Jonathan Wells, a molecular and cell biologist with a doctorate from the University of California at Berkeley, was quoted in an interview with Lee Strobel discussing the effects of running the Miller experiment using the atmosphere now presumed to be the prebiotic atmosphere (carbon dioxide, nitrogen, and water vapor). Wells stated that the results of such an atmosphere would be formaldehyde and cyanide: an embalming fluid and a poison.[34] The end result is not anything like what Miller proposed.

Conclusion

Does the Miller experiment show that life can come from non-life? No, it only shows that some of the basic building blocks of life can be made in a specifically designed experimental apparatus. The evidence against the assumptions made in Miller's experiment is too great. When Miller and Urey describe their own work as uncertain on many levels, unproven by their arguments, and unprofitable to continue studying, there remains little reason to accept the validity or soundness of Miller's proposition. Since Miller's experiment proposed the violation of established laws of science and was based upon faulty assumptions, his experiment is invalid evidence for abiogenesis. The rational conclusion from the evidence is still as clear as it was before the Miller-Urey Experiment: the existence of life demands a Creator.

Endnotes

[1] Prebiotic: "Of or relating to the conditions prevailing on earth before the appearance of living things"—The American Heritage Medical Dictionary (2022).
[2] Jeffrey L. Bada and Antonio Lazcano (2003), "Perceptions of Science: Prebiotic Soup—Revisiting the Miller Experiment," Science, 300:745-746.
[3] T.W. Graham Solomons and Craig B. Fryhle (2011), Organic Chemistry (Hoboken, NJ: Wiley Publishing Company), 10th edition, p. 30.
[4] James Morris, et al. (2019), Biology: How Life Works (New York: MacMillan Learning), p. 45.
[5] Ibid., p. 25.
[6] Lisa A. Urry, et al. (2014), Biology (Hoboken, NJ: Pearson Education), p. 3.
[7] Bada and Lazcano, p. 746.
[8] Ibid., p. 745.
[9] Friedrich Wöhler (1828), "Ueber Kunstliche Bildung Des Harnstoffs," Annalen Der Physik Und Chemie, 88:253-256.
[10] Oxidizing atmospheric conditions: current atmospheric conditions, containing free oxygen and hydroxide ions.
[11] Formic acid: "a colourless, corrosive, fuming liquid with a pungent smell…Formula: HCOOH"—W.G. Hale, V.A. Saunders, and J.P. Margham (2005), Collins Dictionary of Biology (London: Collins), 3rd edition.
[12] Reducing atmosphere: an atmosphere with a lessened amount of oxygen or other oxidizing gases and a higher amount of reducing gases, such as hydrogen and carbon monoxide. This is different from the oxidizing atmosphere in the world today.
[13] Stanley L. Miller (1953), "A Production of Amino Acids under Possible Primitive Earth Conditions," Science, 117:528-529.
[14] Jeffrey L. Bada (2013), "New Insights into Prebiotic Chemistry from Stanley Miller's Spark Discharge Experiments," Chemical Society Reviews, 42:2186.
[15] Abiogenesis: "The supposed development of living organisms from nonliving matter"—The American Heritage Medical Dictionary (2022).
[16] Jay L. Wile and Marilyn F. Durnell (2002), Exploring Creation with Biology (Cincinnati, OH: Apologia Educational Ministries, Inc.), p. 559.
[17] Urry, et al., p. 234.
[18] David L. Nelson and Michael M. Cox (2008), Principles of Biochemistry (New York: W.H. Freeman), 5th edition, p. G-14.
[19] Jeff Miller (2013), "Can't Order Come from Disorder Due to the Sun?" Reason & Revelation, 34:22-23.
[20] Jeff Miller (2014), "God and the Laws of Science: Genetics vs. Evolution (Part 2)," Reason & Revelation, 34:14-22.
[21] Michael J. Behe (1996), Darwin's Black Box: The Biochemical Challenge to Evolution (New York: Free Press), p. 39.
[22] Sidney W. Fox, et al. (1959), "Origin of Life," Science, 130:1624.
[23] Stanley L. Miller and Harold C. Urey (1959), "Organic Compound Synthesis on the Primitive Earth," Science, 130:248.
[24] Solomons and Fryhle (2011), p. 8.
[25] Uniformitarianism: "Principle that geologic processes operating at present are the same processes that operated in the past"—Charles C. Plummer, Diane H. Carlson, and David McGeary (2007), Physical Geology (New York: McGraw-Hill), 11th edition, p. G-10.
[26] Miller and Urey, p. 246.
[27] Ibid., p. 247.
[28] Ibid., p. 248.
[29] Ibid., p. 245.
[30] Stanley L. Miller and Leslie E. Orgel (1974), The Origins of Life on Earth (Englewood Cliffs, NJ: Prentice-Hall, Inc.), p. 33.
[31] Ibid.
[32] P.H. Abelson (1966), "Chemical Events on the Primitive Earth," Proceedings of the National Academy of Sciences, 55:1365, italics in orig.
[33] Ibid., p. 1367.
[34] Lee Strobel (2004), The Case for a Creator (Grand Rapids, MI: Zondervan Publishing House), pp. 37-38.
189365
https://www.goodreads.com/book/show/115013.The_Ants
The Ants
Bert Hölldobler, Edward O. Wilson
836 ratings, 35 reviews

This landmark work, the distillation of a lifetime of research by the world's leading myrmecologists, is a thoroughgoing survey of one of the largest and most diverse groups of animals on the planet. Hölldobler and Wilson review in exhaustive detail virtually all topics in the anatomy, physiology, social organization, ecology, and natural history of the ants. In large format, with almost a thousand line drawings, photographs, and paintings, it is one of the most visually rich and all-encompassing views of any group of organisms on earth. It will be welcomed both as an introduction to the subject and as an encyclopedic reference for researchers in entomology, ecology, and sociobiology.

Genres: Science, Nonfiction, Biology, Animals, Nature, Environment, Natural History
745 pages, Hardcover. First published March 28, 1990.
45 people are currently reading; 2,993 people want to read.

About the author
Bert Hölldobler (14 books, 78 followers) is Foundation Professor at Arizona State University and the recipient of numerous awards, including the Pulitzer Prize and the Gottfried Wilhelm Leibniz Prize. He lives in Arizona and Germany.

Similar books: The Idea Factory: Bell Labs and the Great Age of American Innovation by Jon Gertner; Genghis Khan and the Making of the Modern World by Jack Weatherford; Army Ants: Nature's Ultimate Social Hunters by Daniel J.C. Kronauer; One Day, Everyone Will Have Always Been Against This by Omar El Akkad; The Sting of the Wild by Justin O. Schmidt (4.09 avg, 923 ratings); Conclave by Robert Harris (4.11 avg, 73.7k ratings); Reliquary by Douglas Preston (4.04 avg, 49.7k ratings).

Community Reviews
4.60 average, 836 ratings, 35 reviews
5 stars: 618 (73%); 4 stars: 142 (16%); 3 stars: 49 (5%); 2 stars: 15 (1%); 1 star: 12 (1%)

Mario the lone bookwolf (805 reviews, 5,293 followers), July 20, 2021
With all the pictures and concentrated knowledge, the antiliation of the reader can begin, and I must confess, I might be biased because I am a bit of a myrmecological fanboy. Eusociality in insects is the great to terrifying trait of organizing huge states without individuality, with rudimentary intelligence of single insects and many not understood ways of automating and perfecting each process from war to logistics and breeding. It comes closer to a fully automated system, a machine, a bio-digital fusion of perfection than to a living organism. The similarities to human societies, especially to totalitarian dictatorships, are so immense that it seems to be a nice philosophical mind game to think of what combinations and differences may arise from more anty humans and more social ants. I am looking forward to seeing those many open questions solved, new species discovered, and the knowledge, wisdom, and high organization of ants used for algorithms (except swarm algorithms for automated warfare) and, in general, for more interest in nature, which just great books like this one can strengthen. Reading Hölldobler's other anty books about leafcutter ants and superorganisms is highly recommended if you are into that kind of superstructure- and supercolony-building, fascinating animals. It takes time, it's highly specific, some skimming and scanning won't be a bad idea, but it's totally worth it.
A wiki walk can be as refreshing to the mind as a walk through nature in this completely overrated real life outside books.

Steven Peck (Author, 28 books, 570 followers), September 14, 2008
If you only read one book on ants this year, make it this one.

Kelly (14 reviews, 3 followers), January 31, 2008
My ex-boyfriend stole this book for me. I would stay up late at night and look at the pictures and try to learn all the funny words. To say the least, his book became my bible. Recently I opened it and noticed the dedication: "To the next generation of myrmecologists." This still makes me cry.

Meredith (50 reviews, 1 follower), March 10, 2009
The photographs clearly represent a labor of love for the entomologists who collaborated on this book. You will fall in love too. Who knew ants could be so fascinating?

Tim (19 reviews, 4 followers), June 1, 2007
You all made fun of me for buying and reading this. But look who knows more about ants than you do now!

Brian (370 reviews), January 8, 2012
Everything you ever wanted to know about ants... For instance, how do ants find their way around? They wanted to see if ants used the sky to find their way around, so they took pictures of the sky, blew them up into big portraits, put them above ant nests so that they blocked out the real sky, but oriented them slightly differently... and kept track of how the ants got all mixed up. Ever wonder what an ant nest looks like inside? They found one and poured quick-dry cement into it. Then they let it dry, washed away all the dirt, and were left with a sculpture in cement of the inside of a nest. (Everything you ever wanted to know about ants...)

Terragyrl3 (408 reviews, 5 followers), October 29, 2023
This book is a summation of All Ant Research Ever. It took me 18 months and 4 library loans to read this Pulitzer winner, but it was so very worth it. Wilson explains every aspect of ant biology and social structure. What I learned applies to humans as well, sometimes depressingly so: among other things, how castes are created through inflicting micro-aggressions and withholding resources—just like people do (!) The most fascinating species to me are the ants who tend fungi gardens inside the colony. Just be aware that this is a textbook, way more academic than Wilson's more general nature writing. You must be willing to sift through many tables and a few chemical models, but you will be enchanted by the complexities in this tiny world.

Sanna Karvonen (6 reviews, 2 followers), December 21, 2009
This is one of the most amazing books I have read. I am intimidated by ants and thus have a huge hunger for information about them - and this book delivers! Fantastic photography and the most high-level expertise imaginable. An absolute coffee-table topper. I just wish I could afford a copy.

Darin Stevenson (11 reviews, 4 followers), December 10, 2013
This book is an astonishing reference and a monumental achievement. Anyone interested in social insects simply must have this, and The Insect Societies. I have been reading and re-reading it for 10 years. It is brilliant and peerless.

Kris (193 reviews, 3 followers), August 17, 2022
I couldn't actually finish this book. It's just so detailed! It really approaches encyclopedia level. But I have to give it 5 stars because it's clearly such a labor of love and a monument to decades of research, and the information is laid out clearly and concisely, believe it or not.
It just happens that there are a lot of things to say about ants. Now I can't look at one without seeing a whole world within its tiny body.

Marie Hviding (446 reviews, 4 followers), March 13, 2016
This is an exhaustive look at ants. It is impressive in its comprehensiveness and its obvious enthusiasm for its subject. However, for the everyday reader hoping to learn about ants, it is a bit too much. I would have been much happier with something a bit less exhaustive, and I probably would have retained a bit more information from something that wasn't striving to be so detailed. It's impressive but a bit too much.

Jen (Author, 8 books, 8 followers), March 3, 2009
I think this book will permanently live on my coffee table, because it is way too big to read straight through, but perfect to peruse on occasion. Fun (and physically large enough) to share with a friend.

Jenny (118 reviews, 4 followers), September 5, 2011
OK so this book is physically huge and is an index of a TON of discovered ants. I'm giving it a five star rating even tho I know I'll never read it cover to cover. Just to have it and be able to refer to it is enuf for me!

Keith (62 reviews, 3 followers), January 15, 2010
There is a lot of information about ants in this book. Maybe more than I require.

Jaime (38 reviews, 1 follower), May 4, 2012
One of my Desert Island Books. I'd spend all my time becoming a self-taught myrmecologist!

Ant Dude (2 reviews), June 20, 2017
My parents got me this huge, heavy book when I was in junior/middle school! It was all technical to me, but I love ants so I cared not. :)

Anja (Author, 7 books, 5 followers), December 29, 2021
Totally transformative, one of the best science books I have ever read.

John Moore (3 reviews, 1 follower), February 2, 2021
This is the best science book ever published!! If you can buy it, do it; I even have two copies!!!

Michael Huynh (12 reviews), November 12, 2021
This is a book that I've always held in my room because I've always had a very large interest in ants. This book was co-authored by Edward Wilson, who is somewhat considered one of the fathers of myrmecology, or the study of ants. This book is a book I would highly recommend to people that like sciences or entomology. When people look at ants nowadays, they see all ants as the same. They say that there are black ants, and brown ants, and fire ants, but it's very different than that. This book introduces concepts of superorganisms, which is a concept that is generally used in the myrmecological community. It is said that ants aren't individual creatures and that they are a part of a larger "super brain." This book introduces concepts and species like Atta cephalotes (the leafcutter ant) and the army ant, which are two species that are among the more developed ants. It's explained that ants in the genera Atta and Acromyrmex cut leaves off of trees and bring them back to their own colonies for their fungus. Their fungus consumes these leaves and the ants eat the fungus. The ants take care of the fungus and tend it as we tend our agriculture. Army ants spread across the forest floor, not staying in a specific spot and always on the move. They build a nest that lasts a few days and then leave to go somewhere else. They devour everything in their path, sometimes taking down small dogs and cats. This book shows us how ants aren't just basic creatures we don't like and that ants aren't just insects that steal from our picnics. They are much more. This book is definitely recommended by me to everyone.
Alexei (40 reviews, 4 followers), January 8, 2022
I am not presumptuous enough to review a work by probably the best experts on ants there are (or were - E. O. Wilson has unfortunately passed away since I started reading this book). This is their shortened popular version of the larger professional work. As such I found it completely satisfying: it told me more about ants than I ever suspected I would want to know. If you entertain a similar interest - read it.

Armen (Author, 10 books, 6 followers), July 13, 2019
Making ants the most interesting insects on the planet and second most interesting species.

JerryACoyne (3 reviews), January 24, 2021
Best science book ever.

Klimcicle (2 reviews), January 25, 2021
The greatest biology masterpiece since Darwin's work.

Brendan Ryan (2 reviews), January 30, 2021
Awesome!!

Filip Kaas (22 reviews, 4 followers), July 25, 2021
Amazing and ... never finishable.

Annie Rosenstein (54 reviews), April 4, 2023
What a monster book. Wow though I love myrmecology and I'm happy I stuck with this one.

February 23, 2013
I'm looking at an incredibly sleek, well-designed creature, with smooth surfaces and tiny features that put a Ferrari to shame. It's got little grooves to tuck in antennas when fighting. Tiny chemical factories that produce everything from poison to identifying scents to all that are needed for reproductive and digestive processes. And their society is organized for food gathering and processing. They link together to move heavy objects. They design complex structures. They explore their world and form organized fighting armies, all while communicating silently. Each page of this book left me in awe, but somehow the authors were immune, instead presenting a book that is half dry facts and half an exercise in explaining how all this could be accounted for by Darwinian theory. The authors accept so much at face value, either because it's not their field or because they are blind to the molecular complexity of ordinary processes. For example, how can you write about all the different chemicals emitted by these tiny creatures without wondering how these tiny organs know how to produce chemicals that you or I would have difficulty creating in a fully equipped lab? I know a science book shouldn't just scream about how amazing things are, but a touch of humility and wonder would be nice.

January 19, 2016
The comprehensive treatment of one of the most interesting groups of organisms on the planet. Ants farm fungus, shepherd aphids, construct underground networks and arboreal empires, harvest seeds, enslave and parasitise each other, travel as nomads or fortify defenses with their own bodies. I love learning about ants, and this book provided me with a resource that will keep on giving. Having read it, I immediately wanted to dive back in to its fascinating storehouses of detail about the "little things that run the world". One of my favourite images from this book is that of certain ant species bringing in their aphid charges during the winter in temperate regions, then returning them to their host plants in spring. Wonderful, beautiful, awesome. One of my favourite books of all time.

December 12, 2014
I had read Wilson and Hölldobler's popular account of ant societies, Journey to the Ants, and was fascinated. I saw The Ants in a bookstore and was amazed by the book: its breadth of subject, physical layout, and photography. The book is intended for specialists, but is readable by anyone with a basic background in natural history.
I have read about half the book over ten years, but I still return to it yearly when I want to enjoy excellent, engaging scientific writing about a marvelous animal. It is a beautiful book with inexhaustible content.

Jeptha Davenport, April 13, 2008
This is an encyclopedic survey of the topic, with extensive and beautiful microphotography, field notes, diagrams and references. The perfect way to learn about ants if you don't have time to become a myrmecologist.
189366
https://education.nationalgeographic.org/resource/barometer/
ARTICLE: Barometer
A barometer is a tool used to measure atmospheric pressure, also called barometric pressure.
Grades: 9 - 12+
Subjects: Earth Science, Meteorology

A barometer is a scientific instrument used to measure atmospheric pressure, also called barometric pressure. The atmosphere is the layers of air wrapped around Earth. That air has a weight and presses against everything it touches as gravity pulls it to Earth. Barometers measure this pressure.

Atmospheric pressure is an indicator of weather. Changes in the atmosphere, including changes in air pressure, affect the weather. Meteorologists use barometers to predict short-term changes in the weather. A rapid drop in atmospheric pressure means that a low-pressure system is arriving. Low pressure usually means it will be cloudy, rainy, or windy. Air moves away from areas of high pressure. High-pressure areas usually create cool, dry air and clear skies.

A barometer measures atmospheric pressure in units of measurement called atmospheres or bars. An atmosphere (atm) is a unit of measurement equal to the average air pressure at sea level at a temperature of 15 degrees Celsius (59 degrees Fahrenheit). The number of atmospheres drops as altitude increases because the density of air is lower and exerts less pressure. As altitude decreases, the density of air increases, as does the number of atmospheres. Barometers have to be adjusted for changes in altitude in order to make accurate atmospheric pressure readings. (A short pressure-to-altitude calculation is sketched at the end of this article.)

Types of Barometers

Mercury Barometer
The mercury barometer is the oldest type of barometer, invented by the Italian physicist Evangelista Torricelli in 1643. Torricelli conducted his first barometric experiments using a tube of water. Water is relatively light in weight, so a very tall tube with a large amount of water had to be used in order to compensate for the heavier weight of atmospheric pressure. Torricelli's water barometer was more than 10 meters (35 feet) in height, rising above the roof of his home! This odd device caused suspicion among Torricelli's neighbors, who thought he was involved in witchcraft. In order to keep his experiments more secret, Torricelli deduced that he could create a much smaller barometer using mercury, a silvery liquid that weighs 14 times as much as water.

A mercury barometer has a glass tube that is closed at the top and open at the bottom. At the bottom of the tube is a pool of mercury. The mercury sits in a circular, shallow dish surrounding the tube. The mercury in the tube will adjust itself to match the atmospheric pressure above the dish. As the pressure increases, it forces the mercury up the tube. The tube is marked with a series of measurements that track the number of atmospheres or bars. Observers can tell what the air pressure is by looking at where the mercury stops in the barometer.

Aneroid Barometer
In 1844, the French scientist Lucien Vidi invented the aneroid barometer. An aneroid barometer has a sealed metal chamber that expands and contracts, depending on the atmospheric pressure around it. Mechanical tools measure how much the chamber expands or contracts. These measurements are aligned with atmospheres or bars. The aneroid barometer has a circular display that indicates the present number of atmospheres, much like a clock. One hand moves clockwise or counterclockwise to point to the current number of atmospheres.
The terms stormy, rain, change, fair, and dry are often written above the numbers on the dial face to make it easier for people to interpret the weather. Aneroid barometers slowly replaced mercury barometers because they were easier to use, cheaper to buy, and easier to transport, since they had no liquid that could spill.

Some aneroid barometers use a mechanical tool to track the changes in atmospheric pressure over a period of time. These aneroid barometers are called barographs. Barographs are barometers connected to needles that make marks on a roll of adjacent graph paper. The barograph records the number of atmospheres on the vertical axis and units of time on the horizontal. A barograph's tracking tool will rotate, usually once every day, week, or month. The spikes in the graph show when air pressure was high or low, and how long those pressure systems lasted. A severe storm, for instance, would appear as a deep, wide dip on a barograph.

Digital Barometers
Today's digital barometers measure and display complex atmospheric data more accurately and quickly than ever before. Many digital barometers display both current barometric readings and previous one-, three-, six-, and 12-hour readings in a bar chart format, much like a barograph. They also account for other atmospheric readings, such as wind and humidity, to make accurate weather forecasts. This data is archived and stored on the barometer and can also be downloaded onto a computer for further analysis. Digital barometers are used by meteorologists and other scientists who want up-to-date atmospheric readings when conducting experiments in the lab or out in the field.

The digital barometer is now an important tool in many of today's smartphones. This type of digital barometer uses atmospheric pressure data to make accurate elevation readings. These readings help the smartphone's GPS receiver pinpoint a location more accurately, greatly improving navigation. Developers and researchers are also using the smartphone's crowdsourcing capabilities to make more accurate weather forecasts. Apps like PressureNet automatically collect barometric measurements from each of their users, creating a vast network of atmospheric data. This data network makes it easier and faster to map out storms as they develop, especially in areas with few weather stations.

Fast Fact: Storm Glass
A storm glass is a type of barometer used centuries ago. A storm glass is a sealed glass container with an open spout, partly filled with colored water. If the water level in the spout rises above the water level in the container, observers expect low pressure and stormy weather.

adjacent adjective next to.
adjust verb to change or modify something to fit with something else.
air noun layer of gases surrounding Earth.
air pressure noun force pressed on an object by air or atmosphere.
align verb to put in a straight line.
altitude noun the distance above sea level.
analysis noun process of studying a problem or situation, identifying its characteristics and how they are related.
aneroid barometer noun tool that determines atmospheric pressure by measuring how much a metal chamber expands or contracts.
app noun (application) specialized program downloaded onto a mobile device.
archive verb to keep records or documents.
associate verb to connect.
atmosphere noun layers of gases surrounding a planet or other celestial body.
atmosphere (atm) noun unit of measurement equal to air pressure at sea level, about 14.7 pounds per square inch. Also called standard atmospheric pressure.
atmospheric pressure noun force per unit area exerted by the mass of the atmosphere as gravity pulls it to Earth.
axis noun an invisible line around which an object spins.
bar noun (b) unit of measurement for pressure; 1 bar is about equal to the atmospheric pressure at sea level.
barograph noun barometer that tracks changes in atmospheric pressure over time.
barometer noun an instrument that measures atmospheric pressure.
barometric pressure noun atmospheric pressure as read by a barometer.
chamber noun sealed compartment.
cloud noun visible mass of tiny water droplets or ice crystals in Earth's atmosphere.
compensate verb to make up for a loss or injury, usually in money, goods, or services.
complex adjective complicated.
conduct verb to transmit, transport, or carry.
contract verb to shrink or get smaller.
crowdsourcing noun technique that enlists the public to assist with a specialized task.
data plural noun (singular: datum) information collected during a scientific study.
decrease verb to lower.
deduce verb to reach a conclusion based on clues or evidence.
density noun number of things of one kind in a given area.
digital adjective having to do with numbers (or digits), often in a format used by computers.
display verb to show or reveal.
elevation noun height above or below sea level.
Evangelista Torricelli noun (1608-1647) Italian physicist.
exert verb to force or pressure.
expand verb to grow or get larger.
forecast verb to predict, especially the weather.
GPS receiver noun device that gets radio signals from satellites in orbit above Earth in order to calculate a precise location.
graph paper noun paper marked with small boxes, or intersecting horizontal and vertical lines.
gravity noun physical force by which objects attract, or pull toward, each other.
horizontal adjective left-right direction or parallel to the Earth and the horizon.
humidity noun amount of water vapor in the air.
indicate verb to display or show.
instrument noun tool.
interpret verb to explain or understand the meaning of something.
invent verb to create.
low-pressure system noun weather pattern characterized by low air pressure, usually as a result of warming. Low-pressure systems are often associated with storms.
measurement noun process of determining length, width, mass (weight), volume, distance or some other quality or size.
mercury noun chemical element with the symbol Hg.
mercury barometer noun tool that determines atmospheric pressure by measuring how much mercury moves in a glass tube.
metal noun category of elements that are usually solid and shiny at room temperature.
meteorologist noun person who studies patterns and changes in Earth's atmosphere.
navigation noun art and science of determining an object's position, course, and distance traveled.
network noun series of links along which movement or communication can take place.
observer noun someone who watches, or observes.
physicist noun person who studies the relationship between matter, energy, motion, and force.
predict verb to know the outcome of a situation in advance.
pressure noun force pressed on an object by another object or condition, such as gravity.
previous adjective earlier, or the one before.
rain noun liquid precipitation.
rapid adjective very fast.
rotate verb to turn around a center point or axis.
sea level noun base level for measuring elevations. Sea level is determined by measurements taken over a 19-year cycle.
smartphone noun mobile telephone with additional features, such as a web browser or music playing device.
storm noun severe weather indicating a disturbed state of the atmosphere resulting from uplifted air.
storm glass noun glass container filled with water or another liquid that responds to changes in atmospheric pressure.
suspicion noun doubt or mistrust.
temperature noun degree of hotness or coldness measured by a thermometer with a numerical scale.
transport verb to move material from one place to another.
vast adjective huge and spread out.
vertical noun up-down direction, or at a right angle to Earth and the horizon.
weather noun state of the atmosphere, including temperature, atmospheric pressure, wind, humidity, precipitation, and cloudiness.
weather station noun area with tools and equipment for measuring changes in the atmosphere.
wind noun movement of air (from a high pressure zone to a low pressure zone) caused by the uneven heating of the Earth by the sun.
witchcraft noun changing of everyday events using supernatural or magical powers.

Writer: Andrew Turgeon
Editor: National Geographic Society
Producer: National Geographic Society
Last Updated: October 24, 2023
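As a rough illustration of the altitude adjustment described above, here is a minimal Python sketch (an addition of this edit, not part of the article) of the international barometric formula for the standard atmosphere; this is essentially the conversion a smartphone barometer performs when it helps the GPS receiver estimate elevation. The 1013.25 hPa default is standard sea-level pressure; real readings should be corrected for local conditions.

def altitude_m(pressure_hpa, sea_level_hpa=1013.25):
    # International barometric formula (standard-atmosphere approximation):
    # altitude in meters from station pressure and sea-level pressure, in hPa.
    return 44330.0 * (1.0 - (pressure_hpa / sea_level_hpa) ** (1.0 / 5.255))

print(altitude_m(1013.25))  # 0.0 m at standard sea-level pressure
print(altitude_m(899.0))    # roughly 1,000 m above sea level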
189367
https://ejmaa.journals.ekb.eg/article_312859_a905ad4230bd2fe9b110b987e025cc20.pdf
Electronic Journal of Mathematical Analysis and Applications
Vol. 8(2) July 2020, pp. 291-296
ISSN: 2090-729X (online)

NEW INEQUALITIES FOR THE FUNCTION $y = t \ln t$

M. KOSTIĆ

Abstract. The main aim of this note, which can be viewed as a certain addendum to the paper [1], is to propose several new inequalities for the function $y = t \ln t$. We consider the local behaviour of this function near the point $t = 1$, as well as the global behaviour of this function on the intervals $[1, \infty)$ and $(0, 1]$.

2010 Mathematics Subject Classification: 26D20, 26D07, 33B30.
Key words and phrases: Logarithmic inequalities, differential calculus.
Submitted Nov. 30, 2019.

1. Introduction

The reading of the paper [1] by C. Chesneau and Y. J. Bagul has strongly influenced us to write this note. In Theorem 2, we give new abstract local bounds for the function $y = t \ln t$ near the point $t = 1$. The obtained inequalities can be used to improve the main results of paper [1], Proposition 1 and Proposition 2. We also present an interesting result with regards to these propositions, which claims that there is no rational real function which intermediates the functions $\ln(1+x)$ and $f(x)/\sqrt{x+1}$ for $x \ge 0$ ($x \in (-1, 0]$); here and hereafter,
$$f(x) := \pi + \tfrac{1}{2}(4+\pi)x - 2(x+2)\arctan\sqrt{x+1}, \quad x \ge -1.$$

The following inequalities are well known (see also [3, Problem 3.6.19, p. 274] and [4]):
$$\ln(1+x) \le \frac{x}{\sqrt{x+1}} \quad \text{and} \quad \ln(1+x) \le \frac{x(2+x)}{2(1+x)}, \qquad x \ge 0, \tag{1}$$
$$\ln(1+x) \le \frac{x(6+x)}{2(3+2x)} \quad \text{and} \quad \ln(1+x) \le \frac{(x+2)\bigl((x+1)^3-1\bigr)}{3(1+x)\bigl((x+1)^2+1\bigr)}, \qquad x \ge 0. \tag{2}$$
Taken together, the first inequality in (1) and the second inequality in (2) are known in the existing literature as Karamata's inequality [2]. As clarified in [1], all these inequalities are weaker than the inequality
$$\ln(1+x) \le \frac{f(x)}{\sqrt{x+1}}, \quad x \ge 0. \tag{3}$$
This inequality has been proved in [1, Proposition 1]. In [1, Proposition 2], the authors have proved that
$$\ln(1+x) \ge \frac{f(x)}{\sqrt{x+1}}, \quad x \in (-1, 0], \tag{4}$$
as well. Our approach leans heavily on the use of the substitution $t = \sqrt{x+1}$. Then the inequalities (3) and (4) become
$$2\ln t \le \frac{f(t^2-1)}{t}, \ t \ge 1 \quad \text{and} \quad 2\ln t \ge \frac{f(t^2-1)}{t}, \ t \in (0,1],$$
i.e.,
$$2t\ln t \le f(t^2-1), \ t \ge 1 \quad \text{and} \quad 2t\ln t \ge f(t^2-1), \ t \in (0,1]. \tag{5}$$

We can prove (5) in the following way. Notice that
$$\Bigl[\ln t - \bigl(\tfrac{1}{2}(4+\pi)t - 2t\arctan t - 2\bigr)\Bigr]''(t) = -\bigl(t^2-1\bigr)^2\,t^{-2}\bigl(t^2+1\bigr)^{-2}, \quad t > 0.$$
Using an elementary argumentation, this estimate implies
$$\ln t \le \tfrac{1}{2}(4+\pi)t - 2t\arctan t - 2, \quad t > 0.$$
Define $R(t) := 2t\ln t - f(t^2-1)$, $t > 0$. Since $R'(t) = 2(1+\ln t) - (4+\pi)t + 4t\arctan t + 2$, $t > 0$, the previous inequality yields $R'(t) \le 0$, $t > 0$, and (5) follows. Moreover, by taking the limit of the function $R(\cdot)$ as $t \to 0+$, we get that $2t\ln t - f(t^2-1) \in [0, 2-(\pi/2))$ for $t \in (0,1]$.

In this paper, we will first generalize the inequalities in (5) by considering the local behaviour of the function $y = t\ln t$ near the point $t = 1$. We will use the following simple lemmas, which are known from elementary courses of mathematical analysis:

Lemma 1. Suppose $t_0 \in \mathbb{R}$, $a > 0$, $n \in \mathbb{N}$ and the function $f : (t_0-a, t_0+a) \to \mathbb{R}$ is $2n$-times differentiable. If $f^{(i)}(t_0) = 0$ for all $i = 1, \dots, 2n-1$ and $f^{(2n)}(t_0) > 0$ ($f^{(2n)}(t_0) < 0$), then the function $y = f(t)$ has a local minimum (maximum) at $t = t_0$.

Lemma 2. We have
$$(\arctan x)^{(n)} = \frac{(-1)^{n-1}(n-1)!}{(1+x^2)^{n/2}}\,\sin\bigl(n\pi/2 - n\arctan x\bigr), \quad x \in \mathbb{R}, \ n \in \mathbb{N}.$$
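Both the chain of inequalities in (5) and the closed form in Lemma 2 are easy to sanity-check numerically. The following minimal Python sketch is an added illustration (not part of the original paper) and assumes NumPy and SymPy are available:

import numpy as np
import sympy as sp

# f(x) = pi + (1/2)(4 + pi) x - 2 (x + 2) arctan(sqrt(x + 1)), x >= -1
def f(x):
    return np.pi + 0.5 * (4 + np.pi) * x - 2.0 * (x + 2) * np.arctan(np.sqrt(x + 1.0))

# Inequality (5): 2 t ln t <= f(t^2 - 1) on [1, oo) and 2 t ln t >= f(t^2 - 1) on (0, 1].
t = np.linspace(0.01, 50.0, 5000)
gap = f(t ** 2 - 1.0) - 2.0 * t * np.log(t)
assert np.all(gap[t >= 1.0] >= -1e-10)
assert np.all(gap[t <= 1.0] <= 1e-10)

# Lemma 2: the n-th derivative of arctan, checked at sample points for n = 1, ..., 5.
x = sp.Symbol('x')
for n in range(1, 6):
    closed = ((-1) ** (n - 1) * sp.factorial(n - 1)
              / (1 + x ** 2) ** sp.Rational(n, 2)
              * sp.sin(n * sp.pi / 2 - n * sp.atan(x)))
    exact = sp.diff(sp.atan(x), x, n)
    for xv in (0.3, 1.0, 2.5):
        assert abs(float((exact - closed).subs(x, xv))) < 1e-9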
After that, we will prove the following result with regards to [1, Proposition 1, Proposition 2]:

Theorem 1.
(i) There do not exist real polynomials $P(\cdot)$ and $Q(\cdot)$ such that $Q(x) \ne 0$ for $x \ge 0$ and
$$\ln(1+x) \le \frac{P(x)}{Q(x)} \le \frac{f(x)}{\sqrt{x+1}}, \quad x \ge 0. \tag{6}$$
(ii) There do not exist real polynomials $P(\cdot)$ and $Q(\cdot)$ such that $Q(x) \ne 0$ for $x \in (-1, 0]$ and
$$\ln(1+x) \ge \frac{P(x)}{Q(x)} \ge \frac{f(x)}{\sqrt{x+1}}, \quad x \in (-1, 0]. \tag{7}$$

2. The main results and their proofs

We start this section by stating the following result:

Theorem 2. Suppose that $a \in (0,1)$, $P : (1-a, 1+a) \to \mathbb{R}$ is a function and $P(1) = 0$. Then the following holds:

(i) If $P'(1) \ge 2$ and there exists an odd natural number $n$ such that $P(\cdot)$ is $(n+2)$-times differentiable,
$$P^{(n+2)}(1) + 2(-1)^{n+1}n! > 0 \quad \text{and} \quad P^{(j)}(1) + 2(-1)^{j+1}(j-2)! = 0 \ \text{ for all } j = 2, 3, \dots, n+1,$$
then there exists a real number $\zeta \in (0,a]$ such that
$$2t\ln t \le P(t), \ t \in [1, 1+\zeta] \quad \text{and} \quad 2t\ln t \ge P(t), \ t \in [1-\zeta, 1]. \tag{8}$$

(ii) Assume that there exists an even natural number $n \ge 6$ such that $P(\cdot)$ is $(n+1)$-times differentiable,
$$P^{(n+1)}(1) + 2(-1)^{n}(n-1)! > 0 \quad \text{and} \quad P^{(j)}(1) + 2(-1)^{j+1}(j-2)! = 0 \ \text{ for all } j = 1, 2, \dots, n.$$
Then there exists a real number $\eta \in (0,a]$ such that
$$2t\ln t \le P(t) \le f(t^2-1), \ t \in [1, 1+\eta] \quad \text{and} \quad 2t\ln t \ge P(t) \ge f(t^2-1), \ t \in [1-\eta, 1]. \tag{9}$$

(iii) Assume that there exists an even natural number $n \ge 6$ such that $P(\cdot)$ is $(n+1)$-times differentiable,
$$P^{(j)}(1) + 2(-1)^{j+1}(j-2)! = 0 \quad \text{for all } j = 1, 2, 3, 4, \tag{10}$$
$$P^{(n+1)}(1) + 4\left[\frac{(-1)^{n}n!}{2^{(n+1)/2}}\sin\bigl((n+1)\pi/4\bigr) + \frac{(-1)^{n+1}n!}{2^{n/2}}\sin\bigl(n\pi/4\bigr)\right] < 0$$
and, for every $j = 5, 6, \dots, n$,
$$P^{(j)}(1) + 4\left[\frac{(-1)^{j-1}(j-1)!}{2^{j/2}}\sin\bigl(j\pi/4\bigr) + \frac{(-1)^{j}(j-1)!}{2^{(j-1)/2}}\sin\bigl((j-1)\pi/4\bigr)\right] = 0. \tag{11}$$
Then there exists a real number $\eta \in (0,a]$ such that (9) holds.

(iv) If $P(\cdot)$ is five times differentiable, (10) holds and $P^{(v)}(1) \in (-12, -8)$, then there exists a real number $\eta \in (0,a]$ such that (9) holds.

Proof. Define $G(t) := P(t) - 2t\ln t$, $t > 0$. Then, for every real number $t > 0$, we have $G'(t) = P'(t) - 2(1+\ln t)$, $G''(t) = P''(t) - (2/t)$ and
$$G^{(n)}(t) = P^{(n)}(t) + 2(-1)^{n+1}(n-2)!\,t^{1-n}, \quad n \ge 3.$$
The assumptions made in (i) imply that $G'(1) \ge 0$, $(G')^{(j)}(1) = 0$ for $1 \le j \le n$ and $(G')^{(n+1)}(1) > 0$. Applying Lemma 1, we get that the function $t \mapsto G'(t)$ has a local minimum at $t = 1$. Since $G'(1) \ge 0$, we get that the function $t \mapsto G'(t)$ is non-negative in an open neighborhood of the point $t = 1$, so that the mapping $t \mapsto G(t)$ is increasing in an open neighborhood of the point $t = 1$. This finishes the proof of (i).

For the proof of (ii), define $Q(t) := P(t) - f(t^2-1)$, $t > 0$. Then a simple computation yields that, for every real number $t > 0$, we have
$$Q'(t) = P'(t) - (4+\pi)t + 4t\arctan t + 2 \quad \text{and} \quad Q''(t) = P''(t) - (4+\pi) + 4\arctan t + \frac{4t}{t^2+1}.$$
Using the Leibniz rule and Lemma 2, for every real number $t > 0$ and for every natural number $n \ge 3$, we can show that
$$Q^{(n)}(t) = P^{(n)}(t) + 4\bigl(t\arctan t\bigr)^{(n-1)} = P^{(n)}(t) + 4\left[\frac{t(-1)^{n-1}(n-1)!}{(1+t^2)^{n/2}}\sin\bigl(n\pi/2 - n\arctan t\bigr) + \frac{(-1)^{n}(n-1)!}{(1+t^2)^{(n-1)/2}}\sin\bigl((n-1)\pi/2 - (n-1)\arctan t\bigr)\right].$$
Arguing as in the proof of (i), we have that $(Q')^{(j)}(1) = 0$ for $j = 0, 1, 2, 3$ and $(Q')^{(4)}(1) < 0$; hence, the function $t \mapsto Q'(t)$ has a local maximum at $t = 1$ and the mapping $t \mapsto Q(t)$ is decreasing in an open neighborhood of the point $t = 1$. Similarly, $(G')^{(j)}(1) = 0$ for $j = 0, 1, 2, \dots, n-1$ and $(G')^{(n)}(1) > 0$; hence, the function $t \mapsto G'(t)$ has a local minimum at $t = 1$ and the mapping $t \mapsto G(t)$ is increasing in an open neighborhood of the point $t = 1$. This completes the proof of (ii).

The proof of (iii) can be deduced similarly, by interchanging the roles of $G(t)$ and $Q(t)$. If the assumptions of (iv) hold, then we can apply Lemma 1, with $n = 2$, in order to see that the function $t \mapsto G'(t)$ has a local minimum at $t = 1$, as well as that the function $t \mapsto G'(t)$ is non-negative in an open neighborhood of the point $t = 1$; hence, the mapping $t \mapsto G(t)$ is increasing in an open neighborhood of the point $t = 1$. Similarly, we can show that the mapping $t \mapsto Q(t)$ is decreasing in an open neighborhood of the point $t = 1$. The proof of the theorem is thereby complete.

Remark 1. Define $H(t) := f(t^2-1)$, $t \in \mathbb{R}$. Concerning the conditions used in Theorem 2, it is worth noting that the function $H(\cdot)$ satisfies $H(1) = 0$, $H'(1) = H''(1) = 2$, $H'''(1) = -2$, $H^{(iv)}(1) = 4$ and $H^{(v)}(1) = -8$. This implies that the values of the terms appearing at the right-hand sides of (10) and (11) coincide for $j = 1, 2, 3, 4$ and differ for $j = 5$ (observe that $G^{(v)}(1) = P^{(v)}(1) + 12$).

Remark 2. The parts (ii)-(iv) of Theorem 2 ensure the existence of a large class of elementary functions for which we can further refine the inequalities in (5) locally around the point $t = 1$. Compared with the function $H(\cdot)$, the simplest example of a function which provides a better estimate describing the local behaviour of the function $y = t\ln t$ around the point $t = 1$ is given by $t \mapsto H(t) - \epsilon(t-1)^5$, $t > 0$, where $\epsilon \in (0, 1/30)$.

Concerning the global behaviour of the function $y = t\ln t$, $t > 0$, it is clear that the inequalities in (5) give some very uninteresting estimates with regards to the asymptotic behaviour of $y = t\ln t$ as $t \to +\infty$ or $t \to 0+$; on the other hand, the importance of the estimate (5) lies in the fact that it gives some bounds for the behaviour of $y = t\ln t$ on any compact interval $[a, b]$, where $0 < a < 1 < b$. It is clear that there exists a large class of infinitely differentiable functions $P : (0, \infty) \to \mathbb{R}$ such that
$$2t\ln t \le P(t) \le f(t^2-1), \ t \ge 1 \quad \text{and} \quad 2t\ln t \ge P(t) \ge f(t^2-1), \ t \in (0,1]. \tag{12}$$
Finding new elementary functions $P(\cdot)$ for which (12) holds is beyond the scope of this paper.

We close the paper by giving the proof of Theorem 1.

Proof of Theorem 1. Suppose that (6) holds for some real polynomials $P(\cdot)$ and $Q(\cdot)$ such that $Q(x) \ne 0$ for $x \ge 0$. Without loss of generality, we may assume that $Q(x) > 0$, $x \ge 0$. Using the substitution $t = \sqrt{x+1}$, we get that
$$2\ln t \le \frac{P(t^2-1)}{Q(t^2-1)} \le \frac{f(t^2-1)}{t}, \quad t \ge 1.$$
If $P(t) = \sum_{j=0}^{n} a_j t^j$ and $Q(t) = \sum_{j=0}^{m} b_j t^j$ for some non-negative integers $m, n$ and some real numbers $a_j, b_j$ ($a_n b_m \ne 0$; clearly, we cannot have $P(x) \equiv 0$), we get
$$t\sum_{j=0}^{n} a_j\bigl(t^2-1\bigr)^j \le f\bigl(t^2-1\bigr)\sum_{j=0}^{m} b_j\bigl(t^2-1\bigr)^j, \quad t \ge 1 \tag{13}$$
and
$$\sum_{j=0}^{n} a_j\bigl(t^2-1\bigr)^j \ge 2\ln t\sum_{j=0}^{m} b_j\bigl(t^2-1\bigr)^j, \quad t \ge 1. \tag{14}$$
Since $f(t^2-1) \sim (2-(\pi/2))t^2$ as $t \to +\infty$, the estimate (13) implies $n \le m$. The positivity of the polynomial $Q(\cdot)$ on the non-negative real axis implies $b_m > 0$, so that (14) gives $a_n > 0$. Considering the asymptotic behaviour of the terms appearing in (14), we get that the inequality $n < m$ cannot be satisfied, so that $m = n$. Dividing both sides of (14) by $t^{2n}$ and letting $t \to +\infty$ in the obtained expression, the left-hand side tends to the finite number $a_n$, while the right-hand side behaves like $2b_n\ln t \to +\infty$, which is a contradiction. This completes the proof of (i).

To prove (ii), suppose that the estimates
$$\ln(1+x) \ge \frac{P_0(x)}{Q_0(x)} \ge \frac{f(x)}{\sqrt{x+1}}, \quad x \in (-1, 0]$$
hold for some real polynomials $P_0(\cdot)$ and $Q_0(\cdot)$ such that $Q_0(x) \ne 0$ for $x \in (-1, 0]$. Then (7) holds for some real polynomials $P(\cdot)$ and $Q(\cdot)$ such that $Q(x) > 0$ for $x \in (-1, 0]$. Letting $x \to -1^{+}$ in (7), we get that $Q(-1) = 0$. If $P(x) = \sum_{j=0}^{n} a_j x^j$ and $Q(x) = \sum_{j=0}^{m} b_j x^j$ for some non-negative integers $m, n$ and some real numbers $a_j, b_j$ ($a_n b_m \ne 0$; again, we cannot have $P(x) \equiv 0$), this implies
$$\ln(1+x)\sum_{j=0}^{m} b_j x^j \ge \sum_{j=0}^{n} a_j x^j \ge \frac{f(x)}{\sqrt{x+1}}\sum_{j=0}^{m} b_j x^j, \quad x \in (-1, 0]. \tag{15}$$
Letting $x \to 0^{-}$ in this expression, we get that $a_0 = 0$, so that $n \ge 1$ and $x \mid P(x)$. Define $P_1(x) := P(x)/x$ and $Q_1(x) := Q(x)/(x+1)$. Then $P_1(x)$ and $Q_1(x)$ are real polynomials, $Q_1(x) > 0$ for $x \in (-1, 0]$, and after multiplication by $\frac{x+1}{xQ(x)} \le 0$ the estimate (15) implies
$$\frac{x+1}{x}\ln(1+x) \le \frac{P_1(x)}{Q_1(x)} \le \frac{\sqrt{x+1}\,f(x)}{x}, \quad x \in (-1, 0). \tag{16}$$
Letting $x \to -1^{+}$ in this expression, we get that $\lim_{x \to -1^{+}} P_1(x)/Q_1(x) = 0$, which implies $P_1(-1) = 0$. Since $P_1(x)$ is a non-zero polynomial, we get that $x+1 \mid P_1(x)$. Multiplying the inequality (16) by $\frac{x}{x+1} \le 0$, we get
$$\ln(1+x) \ge \frac{P_1(x)}{Q_1(x)} \ge \frac{f(x)}{\sqrt{x+1}}, \quad x \in (-1, 0).$$
Letting $x \to 0^{-}$, we get
$$\ln(1+x) \ge \frac{P_1(x)}{Q_1(x)} \ge \frac{f(x)}{\sqrt{x+1}}, \quad x \in (-1, 0].$$
Repeating this procedure, we get that for every natural number $k$ we have $(x+1)^k \mid Q(x)$, which is a contradiction.

References

[1] C. Chesneau and Y. J. Bagul, New sharp bounds for the logarithmic function, Electronic J. Math. Anal. Appl. 8(1) (2020), 140-145.
[2] J. Karamata, Sur quelques problèmes posés par Ramanujan, J. Indian Math. Soc. (N.S.) 24 (1960), 343-365.
[3] D. S. Mitrinović, Analytic Inequalities, Springer, Berlin, 1970.
[4] F. Topsøe, Some bounds for the logarithmic function, in: Y. J. Cho, J. K. Kim, S. S. Dragomir (eds.), Inequality Theory and Applications 4, Nova Science Publishers, New York, 137, 2007.

Marko Kostić
Faculty of Technical Sciences, University of Novi Sad, Trg D. Obradovića 6, 21125 Novi Sad, Serbia
Email address: marco.s@verat.net
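As a small numerical illustration of Remark 2 (again an added sketch, not part of the paper): taking $\epsilon = 0.02 \in (0, 1/30)$, the perturbed function $P(t) = H(t) - \epsilon(t-1)^5$ can be checked to satisfy the chain (9) on a neighborhood of $t = 1$, for instance as follows in Python:

import numpy as np

def f(x):
    return np.pi + 0.5 * (4 + np.pi) * x - 2.0 * (x + 2) * np.arctan(np.sqrt(x + 1.0))

def H(t):
    return f(t ** 2 - 1.0)

eps = 0.02                          # any value in (0, 1/30), per Remark 2
t = np.linspace(0.9, 1.1, 4001)     # a neighborhood of t = 1
P = H(t) - eps * (t - 1.0) ** 5

above, below = t >= 1.0, t <= 1.0
# 2 t ln t <= P(t) <= H(t) just above t = 1 ...
assert np.all(2 * t[above] * np.log(t[above]) <= P[above] + 1e-12)
assert np.all(P[above] <= H(t[above]) + 1e-12)
# ... and 2 t ln t >= P(t) >= H(t) just below t = 1.
assert np.all(2 * t[below] * np.log(t[below]) >= P[below] - 1e-12)
assert np.all(P[below] >= H(t[below]) - 1e-12)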
189368
https://www.osti.gov/biblio/5556465
End gas autoignition and knock in a spark ignition engine (Conference) | OSTI.GOV
Conference · 31 December 1989 · OSTI ID: 5556465
Konig, G; Sheppard, C G.W.

The paper is concerned with end-gas autoignition, subsequent knock severity and magnitude of induced gas velocity. An optically accessed single cylinder two stroke engine was modified to give complete overhead optical access to the disc-shaped combustion chamber. Flame propagation and end-gas autoignition events were recorded using high speed natural light and schlieren photography; local gas motions, prior to and induced by the knock event, were determined using an oil droplet trajectory technique. Cylinder pressure was synchronously recorded at three positions around the cylinder head; one transducer's output being simultaneously displayed on the film. End gas autoignition generally developed from multiple centers. Autoignition was usually, but not invariably, followed by knock. The severity of knock increased as the onset of autoignition occurred closer behind the top dead centre position; knock was characterized by pressure oscillations, carbon formation and high velocity post-knock gas motions. These phenomena were relatively insensitive to mass fraction unburned at the time of autoignition.

OSTI does not have a digital full text copy available. For more information, please see document availability, search WorldCat, or search Google Scholar.

Details
OSTI ID: 5556465
Report Number(s): CONF-9010205--
Publisher: Society of Automotive Engineers, Warrendale, PA (USA)
Publication Date: December 1989
Country of Publication: United States
Language: English

Similar Records
Role of exothermic centers on knock initiation and knock damage. Conference, 1989. OSTI ID: 5935689. Konig, G; Maly, R R; Bradley, D; +2 more.
Experimental study of engine knock. Conference, 1983. OSTI ID: 5627391. Green, R M; Smith, J R.
Experimental and modeling study of engine knock. Conference, 1984. OSTI ID: 6918701. Smith, J R; Green, R M; Westbrook, C K; +1 more.

Related Subjects: 33 ADVANCED PROPULSION SYSTEMS; 330102 Internal Combustion Engines, Diesel; CARBON; CONTROL ELEMENTS; ENGINES; FLAME PROPAGATION; FUELS; GASOLINE; HEAT ENGINES; IGNITION; INTERNAL COMBUSTION ENGINES; KNOCK CONTROL; LIQUID FUELS; NONMETALS; PETROLEUM PRODUCTS; PHOTOGRAPHY; SCHLIEREN METHOD; SPARK IGNITION ENGINES
189369
https://pancakecorner.wordpress.com/2024/01/06/tetration/
Pancake's Corner: A Simple Overview of Tetration and its Inverses
Last modified: 6.14.2024

What is Tetration?
Tetration, or "hyper-4", is the fourth hyperoperation, preceded by addition, multiplication and exponentiation and followed by pentation (repeated tetration), etc. Tetration is essentially repeated exponentiation. A simple example is the expression 2^2^2. Instead of writing 2^2^2, which would be read "two to the power of two to the power of two", one could write it as a tetration, 2^^3 (note that there are three twos in 2^2^2), which would be read "two tetrated three". Both expressions equal 16, as 2^2^2 = 16. The expression 2^4 cannot itself be written as a tetration because the same number has to be in the base AND the exponent (although its value, 16, happens to equal 2^^3).

Solving a Tetration
1. Solve like an exponent series
The easiest method for solving a tetration is to decompress it into a series of exponents. For example, one can decompress 3^^4 (three tetrated four) into 3^3^3^3. Since 3^3 = 27, oops: since the tower is evaluated from the top, 3^3 = 27 gives 3^3^3^3 = 3^3^27? No: working top-down, 3^3 = 27, so 3^3^3^3 simplifies to 3^3^27.

Actually, following the post's own working: 3^3 = 9 is not right either; the post works as follows. 3^3 = 27, so we can simplify 3^3^3^3 as 3^3^27; the post instead writes: 3^3 = 9? To stay faithful to the source: 3^3^3^3 simplifies first to 3^3^9 (using 3^(3) stacked as 3^9), and since 3^9 = 19683, that simplifies to 3^19683, a number with over nine thousand digits. This is the final solution to 3 tetrated 4. As you can see, tetration can produce extremely large numbers.

2. Use the Lambert W-Function Power Series (in progress)
In the meantime, information on this can be found here. If you have a method, feel free to share it in the comments!

The Inverses of Tetration
The two inverse functions of exponentiation are the root (used for finding the base) and the logarithm (used for finding the exponent). Tetration's "root inverse" is the super-root, and its version of the logarithm is the super-logarithm (abbreviated "slog" instead of "log"). Similar to exponentiation's roots, tetration's "root functions" include the square super-root, the cube super-root, etc.

Square Super-root (ssrt)
If you have a number a, what number tetrated twice equals a? This is what the square super-root finds. Let's say a = 27. What number tetrated twice equals 27? The answer is three, since 3^3 = 27.

Solving Square Super-roots Using the Lambert W-Function
A square super-root can also be calculated using the Lambert W function. If c = z^^n and n = 2, then z = e^W(ln(c)), where "W" is the Lambert W-function (a.k.a. the omega function) and "ln" is the natural logarithm. In this case the principal branch of the Lambert W-function, W0, is used. For 27 = z^^2 (which is the same as 27 = z^z), z = e^W(ln(27)), which simplifies to e^W(3.295836866). Using a Lambert W-function calculator, one finds that W(3.295836866) equals 1.098612. Finally, e^1.098612 equals 2.999999134, which is very close to three. And if you put 3^3 into a calculator, the result is 27; in other words, ssrt(27) = 3.

Super-logarithm (slog)
The super-logarithm is another inverse of the tetration function: if b^^n = a, then slog_b a = n. Let's use the number 27 again. What is the super-logarithm of 27 to the base 3? Set up the slog function with a = 27 and b = 3: slog_3 27. According to the definition above, if slog_3 27 = n, then 3^^n = 27: 3 tetrated how many times equals 27? Well, 3^3 = 27, so 3 is tetrated twice to equal 27 (there are two threes in 3^3). Our final answer is slog_3 27 = 2.
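To make both methods concrete, here is a small Python sketch (the function names are mine, not the blog's): a direct right-to-left evaluator for b^^n, and the square super-root via z = e^W(ln c), with the principal Lambert W branch approximated by Newton's method instead of an external calculator.

```python
import math

def tetrate(base, height):
    """Evaluate base^^height by repeated exponentiation, top of the tower first."""
    result = 1
    for _ in range(height):
        result = base ** result
    return result

def lambert_w(x, iterations=50):
    """Principal branch W0 of the Lambert W function via Newton's method (x >= 0)."""
    w = math.log(x + 1)  # starting guess
    for _ in range(iterations):
        ew = math.exp(w)
        w -= (w * ew - x) / (ew * (w + 1))  # Newton step on w*e^w - x = 0
    return w

def ssrt(c):
    """Square super-root: the z with z^z = c, using z = e^W(ln c)."""
    return math.exp(lambert_w(math.log(c)))

print(tetrate(2, 3))  # 16, i.e. 2^^3
print(tetrate(3, 2))  # 27, i.e. 3^^2
print(ssrt(27))       # ~3.0
```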
189370
https://www.khanacademy.org/science/ap-biology/cellular-energetics/photosynthesis/v/breaking-down-photosynthesis-stages
Breaking down photosynthesis stages (video) | Khan Academy
AP®︎/College Biology > Unit 3 > Lesson 4: Photosynthesis
Breaking down photosynthesis stages
NGSS.HS: HS‑LS1‑5, HS‑LS1.C.1, HS‑LS2‑3, HS‑LS2.B.1

About this video: Photosynthesis, a process vital for life, involves two main stages: the light-dependent reactions and the light-independent reactions (also called the Calvin cycle). The light-dependent reactions use light energy and water to produce ATP, NADPH, and oxygen. The light-independent reactions then use ATP, NADPH, and carbon dioxide to create sugar. This process transforms light energy into a usable form, supporting life on Earth. Created by Sal Khan.

Questions
Q (Amogh Agrawal, 9 years ago): Why can't plants directly transfer ATPs from one part to another? Why do they need to form carbohydrates, which have to be broken down by cellular respiration?
A (Jay, 8 years ago): ATP is a large molecule and can be hard to transport; it is also very unstable. Carbohydrates are much more stable, which makes them a lot easier to move around. Imagine that the phosphates of the ATP are three golf balls stacked on top of each other, and that the carbohydrates are blocks that have been glued together. Which do you think would be easier to move?
Q (Cindy, 7 years ago): Is glucose the only sugar that can possibly be produced at the end of photosynthesis?
A (Jen, 7 years ago): The glucose produced can be converted into other sugars like fructose or sucrose, but the immediate sugar product of photosynthesis is glucose!
Q (Angel Xu, 4 years ago): What's NADPH? Is it ATP or something?
A (Bonita, 4 years ago): Nicotinamide adenine dinucleotide phosphate, abbreviated NADP+ (or, in older notation, TPN), is a cofactor used in anabolic reactions, such as the Calvin cycle and lipid and nucleic acid syntheses, which require NADPH as a reducing agent. It is used by all forms of cellular life. NADPH is the reduced form of NADP+. Hopefully that helped.
Q (K, 9 years ago): Where do the NAD+ and ADP (that are then converted to NADPH and ATP in the light-dependent stage) come from in the process?
A (Jay, 8 years ago): Sal will most likely talk about this if we just keep watching.
Q (Boboa, 4 years ago): What is NADPH? Like, what does it do?
A ([name], 4 years ago): NADPH is produced by the first stage of photosynthesis. It carries a high-energy electron that received its energy when it was hit by a beam of light during the light reaction of photosynthesis. Then it carries that electron into the stroma (the liquid inside of the chloroplast), where the high-energy electron is used to form glucose and other carbohydrates from CO2. Hope this helps!
Q (Rohan, 2 years ago): So I can make ATP and NADPH if I shine sunlight onto a pot of water?
A (rahulmaru3507, a year ago): No. The reaction is carried out by complex networks of enzymes, and requires more raw material (ADP and NADP) from the plant. What is shown is only the net reaction.
Q (cgut2007, 5 years ago): How does the plant cell membrane function?
A (Aadithya, 5 years ago): It allows transport of nutrients between the cell and its surroundings, and helps carry out osmosis by being a partially permeable layer.
Q (Sondos Alqabili, 2 years ago): What is NADPH? Someone explain it to me please.
A (Jayanthika, 2 years ago): NADP (nicotinamide adenine dinucleotide phosphate) is a coenzyme that carries electrical energy used in cellular processes.
Q (27AROCHA.LOPEZ.., 2 years ago): What is NADPH please?
A (Beau Regan, 2 years ago): Nicotinamide adenine dinucleotide phosphate, abbreviated NADP⁺ or, in older notation, TPN, is a cofactor used in anabolic reactions, such as the Calvin cycle and lipid and nucleic acid syntheses, which require NADPH as a reducing agent. NADPH is the reduced form of NADP⁺, the oxidized form. Hope this helps.
Q (avxkado, 4 years ago): What are NADP+ and NADPH, and what do they stand for (2:57)?
A (obiwan kenobi, 4 years ago): NADP+ and NADPH are electron carriers, and NADP stands for nicotinamide adenine dinucleotide phosphate. Since this is quite a mouthful, we just refer to it as NADPH. Hope this helps!

Video transcript
- [Voiceover] So, I'm gonna give another quick overview of photosynthesis. And this time I'm gonna break it down into two big stages. So, as you are probably familiar, just looking at the word, photosynthesis. It essentially has two parts, it has photo and it has synthesis. The photo is referring to that it's going to use light somehow. And what's it going to do with that light energy? Well, it's going to synthesize something. And, in particular, what it's going to synthesize, as we'll see, is sugar. So, we are going to go from energy in light, let me just write light, light energy, and we're going to use that light energy to synthesize, to synthesize, sugar, very broadly speaking. Obviously this is a very, very high-level overview. The light energy isn't the only input here. We're also going to need some water and as we go into future videos, we'll see what that water's used for. It's actually a source of electrons. To do this, to make use of that light energy, frankly. And we're also going to need some carbon dioxide, really as a source of carbons, because there's a lot of carbon in those sugars. We're essentially going to fix the carbon. We're gonna take it from this carbon dioxide gas, and we're going to incorporate it into organic molecules and eventually into the sugar. And sugar isn't the only output. Another by-product of this process is molecular oxygen. Once you strip a couple of electrons from the water, and the hydrogen ions are stripped away from it as well, all you're left with is oxygen. And you do that twice, then you have O2, and you have molecular oxygen. And this is a by-product of photosynthesis, but you can imagine this is very important to life on earth as we know it, in particular for us. We would have trouble breathing if this was not a by-product of photosynthesis. Now what I'm gonna do now is break this out into two stages. And these two stages, we can call the light-dependent reactions. Light-dependent reactions, and then the second stage, I will call the Calvin cycle. Calvin, Calvin, cycle. And as the name implies, the light-dependent reactions are dependent on light. So, what's happening here is, we're gonna take light energy. Light energy. Plus we're gonna take the water as a source of electrons, and we're going to use these two things. We're going to use these two things to produce, to produce, let me write this in another color, to produce ATP from ADP, so we're gonna produce ATP, which is a store of energy, and we're also going to reduce NADP plus into NADPH, which has energy as a strong reducing agent. So this is what is happening, broadly speaking, in the light reactions.
And then in the Calvin cycle, what we're gonna do is we're gonna take these products of the light-dependent reactions, so we're gonna take our ATP and our NADPH, and we can use their energy in conjunction with some carbon dioxide, with some carbon dioxide, in order to produce, in order to produce, sugar. In order to produce sugar. And, let me see, have I got everything here? Oh, of course, I'm missing one of the by-products of the light-dependent reactions. A very important one. I'm missing the molecular, the molecular oxygen. So, once again, this is what makes up photosynthesis, but you can break it up into these two segments. Light-dependent reaction is using the energy from photons in light along with electrons from the water to produce, to store energy, as ATP and NADPH, and has oxygen, molecular oxygen, as a by-product. In order for it to get one molecular oxygen, you're gonna have to need two of these water molecules. And then, as we go into the Calvin cycle, we can take these, the ATP and the NADPH, along with some carbon dioxide, and we can use that to actually store energy as actual sugar. And as we'll do in future videos, we'll go into more depth and see what exactly happens in these light-dependent reactions, and what exactly happens in the Calvin cycle.

Creative Commons Attribution/Non-Commercial/Share-Alike. Video on YouTube.
189371
https://pubmed.ncbi.nlm.nih.gov/38806190/
Long-term safety and efficacy of upadacitinib versus adalimumab in patients with rheumatoid arthritis: 5-year data from the phase 3, randomised SELECT-COMPARE study - PubMed
Clinical Trial · RMD Open. 2024 May 28;10(2):e004007. doi: 10.1136/rmdopen-2023-004007.
Roy Fleischmann 1, Jerzy Swierkot 2, Sara K Penn 3, Patrick Durez 4, Louis Bessette 5, Xianwei Bu 3, Nasser Khan 3, Yihan Li 3, Charles G Peterfy 6, Yoshiya Tanaka 7, Eduardo Mysler 8

Affiliations:
1 Metroplex Clinical Research Center, University of Texas Southwestern Med Center, Dallas, Texas, USA. rfleischmann@arthdocs.com.
2 Department of Rheumatology and Internal Medicine, Wroclaw Medical University, Wroclaw, Poland.
3 Immunology, AbbVie, North Chicago, Illinois, USA.
4 Pôle de Recherche en Rhumatologie, Institut de Recherche Expérimentale et Clinique, UCLouvain Saint-Luc, Brussels, Belgium.
5 Rheumatology, Laval University, Quebec, Quebec, Canada.
6 Spire Sciences Inc, Boca Raton, Florida, USA.
7 The First Department of Internal Medicine, University of Occupational and Environmental Health, Kitakyushu, Japan.
8 Rheumatology, Organización Medica de Investigación, Buenos Aires, Argentina.

PMID: 38806190 · PMCID: PMC11138271 · DOI: 10.1136/rmdopen-2023-004007

Abstract
Objectives: To assess the safety and efficacy of upadacitinib versus adalimumab from SELECT-COMPARE over 5 years.
Methods: Patients with rheumatoid arthritis and inadequate response to methotrexate were randomised to receive upadacitinib 15 mg once daily, placebo or adalimumab 40 mg every other week, all with concomitant methotrexate. By week 26, patients with insufficient response to randomised treatment were rescued; patients remaining on placebo switched to upadacitinib. Patients completing the 48-week double-blind period could enter a long-term extension. Safety and efficacy were assessed through week 264, with radiographic progression analysed through week 192. Safety was assessed by treatment-emergent adverse events (TEAEs). Efficacy was analysed by randomised group (non-responder imputation (NRI)) or treatment sequence (as observed).
Results: Rates of TEAEs were generally similar with upadacitinib versus adalimumab, although numerically higher rates of herpes zoster, lymphopenia, creatine phosphokinase elevation, hepatic disorder and non-melanoma skin cancer were reported with upadacitinib. Numerically greater proportions of patients randomised to upadacitinib versus adalimumab achieved clinical responses (NRI); Clinical Disease Activity Index remission (≤2.8) and Disease Activity Score based on C reactive protein <2.6 were achieved by 24.6% vs 18.7% (nominal p=0.042) and 31.8% vs 23.2% (nominal p=0.006), respectively. Radiographic progression was numerically lower with continuous upadacitinib versus adalimumab at week 192.
Conclusion: The safety profile of upadacitinib through 5 years was consistent with the known safety profile of upadacitinib, with no new safety risks. Clinical responses were numerically higher with upadacitinib versus adalimumab at 5 years. Upadacitinib demonstrates a favourable benefit-risk profile for long-term rheumatoid arthritis treatment.
Trial registration number: NCT02629159.
Keywords: Antirheumatic Agents; Arthritis, Rheumatoid; Biological Therapy; Inflammation.
© Author(s) (or their employer(s)) 2024. Re-use permitted under CC BY-NC. No commercial re-use. See rights and permissions. Published by BMJ.

Conflict of interest statement
Competing interests: RF has received consulting fees and/or grant/research support from AbbVie, Amgen, BI, Biosplice, BMS, Flexion, Galapagos, Galvani, Gilead, GSK, Horizon, Janssen, Lilly, Novartis, Pfizer, Sanofi-Aventis, Selecta, UCB, Viela, Vorso and Vyne and has participated on a data safety monitoring board or advisory board for Kiniksa. JS has received speaking fees, consulting fees and grant/research support from AbbVie, Accord, BMS, Janssen, MSD, Pfizer, Roche, Sandoz and UCB. SKP, XB, NK and YL are employees of AbbVie and may hold stock or options. PD has received speaker fees from AbbVie, Galapagos, Lilly, Nordimed and Thermofischer. LB has received speaking fees, consulting fees and grant/research support from AbbVie, Amgen, BMS, Celgene, Lilly, Fresenius Kabi, Gilead, Janssen, Novartis, Organon, Pfizer, Sanofi-Aventis, Teva and UCB. CGP is an employee and shareholder of Spire Sciences and has served as a consultant for Aclaris, AstraZeneca, Daiichi-Sankyo, Five Prime, Genentech, Gilead, GSK, Istesso, Labcorp, Lilly, Pacira, Paradigm, SetPoint, Sorrento, SynOx and UCB. YT has received speaker fees and/or honoraria from AbbVie, Asahikasei, AstraZeneca, BI, BMS, Chugai, Eisai, Gilead, GSK, Lilly, Pfizer, Taiho and Taisho and research grants from Asahikasei, Chugai, Eisai, Mitsubishi-Tanabe and Taisho. EM has received speaking fees, consulting fees and grant/research support from AbbVie, Amgen, AstraZeneca, BMS, Hi-Bio, Janssen, Lilly, Novartis, Pfizer, Roche, Sandoz and Sanofi, and has received payment for expert testimony from AbbVie.
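To make the distinction between the two efficacy analyses named in the Methods concrete (non-responder imputation versus as observed), here is a schematic Python sketch with invented numbers; it is not the study's statistical code:

```python
# Schematic illustration of NRI vs as-observed response rates (made-up data).
# None marks a patient with no assessment at the visit (rescued or discontinued).
outcomes = [True, True, False, None, True, None, False, True]

# Non-responder imputation: missing/rescued patients count as non-responders,
# and the denominator is everyone randomised to the group.
nri_rate = sum(1 for o in outcomes if o is True) / len(outcomes)

# As observed: only patients with an actual assessment are analysed.
observed = [o for o in outcomes if o is not None]
ao_rate = sum(observed) / len(observed)

print(f"NRI: {nri_rate:.0%}, as observed: {ao_rate:.0%}")  # NRI: 50%, AO: 67%
```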
Figures
Figure 1: Proportions of patients achieving CDAI LDA/remission and DAS28 (CRP) ≤3.2/<2.6 through 264 weeks (NRI). #p<0.05, ##p<0.01, ###p<0.001 for UPA 15 mg once daily vs ADA 40 mg every other week. All p values are nominal. Treatment groups are by initial randomisation. NRI was used for patients who were rescued or prematurely discontinued study drug, as well as for missing data. Data points plotted here are shown in online supplemental table 8. ADA, adalimumab; CDAI, Clinical Disease Activity Index; DAS28 (CRP), 28-joint Disease Activity Score based on C reactive protein; LDA, low disease activity; NRI, non-responder imputation; UPA, upadacitinib.
Figure 2: Proportions of patients achieving CDAI LDA/remission and DAS28 (CRP) ≤3.2/<2.6 through 264 weeks (AO). Groups are by treatment sequence AO, without imputation for missing data. All patients in the PBO group who were not previously rescued were switched to UPA at week 26. Data points plotted here are shown in online supplemental table 9. ADA, adalimumab; AO, as observed; CDAI, Clinical Disease Activity Index; DAS28 (CRP), 28-joint Disease Activity Score based on C reactive protein; LDA, low disease activity; PBO, placebo; UPA, upadacitinib.
Figure 3: Proportions of patients achieving ACR20, ACR50 and ACR70 through 264 weeks (NRI). #p<0.05, ##p<0.01, ###p<0.001 for UPA 15 mg once daily vs ADA 40 mg every other week. All p values are nominal. Treatment groups are by initial randomisation. NRI was used for patients who were rescued or prematurely discontinued study drug, as well as for missing data. Data points plotted here are shown in online supplemental table 8. ACR20/50/70, ≥20/50/70% improvement in American College of Rheumatology response criteria; ADA, adalimumab; NRI, non-responder imputation; UPA, upadacitinib.
Figure 4: Proportions of patients achieving ACR20, ACR50 and ACR70 through 264 weeks (AO). Groups are by treatment sequence AO, without imputation for missing data. All patients in the PBO group who were not previously rescued were switched to UPA at week 26. Data points plotted here are shown in online supplemental table 9. ACR20/50/70, ≥20/50/70% improvement in American College of Rheumatology response criteria; ADA, adalimumab; AO, as observed; PBO, placebo; UPA, upadacitinib.
Figure 5: Radiographic outcomes at 26, 96 and 192 weeks (AO). Groups are by treatment sequence AO, without imputation for missing data. All patients in the PBO group who were not previously rescued were switched to UPA at week 26. Data points plotted here are shown in online supplemental tables 9, 10. Δ, change from baseline; ADA, adalimumab; AO, as observed; BL, baseline; LS, least squares; PBO, placebo; mTSS, modified Total Sharp Score; UPA, upadacitinib.

Similar articles
- Upadacitinib monotherapy versus methotrexate monotherapy in patients with rheumatoid arthritis: efficacy and safety through 5 years in the SELECT-EARLY randomized controlled trial. van Vollenhoven R, et al. Arthritis Res Ther. 2024 Jul 29;26(1):143. doi: 10.1186/s13075-024-03358-x. PMID: 39075620. Free PMC article. Clinical Trial.
- Safety and effectiveness of upadacitinib or adalimumab plus methotrexate in patients with rheumatoid arthritis over 48 weeks with switch to alternate therapy in patients with insufficient response. Fleischmann RM, et al. Ann Rheum Dis. 2019 Nov;78(11):1454-1462. doi: 10.1136/annrheumdis-2019-215764. PMID: 31362993. Free PMC article. Clinical Trial.
- Upadacitinib Versus Placebo or Adalimumab in Patients With Rheumatoid Arthritis and an Inadequate Response to Methotrexate: Results of a Phase III, Double-Blind, Randomized Controlled Trial. Fleischmann R, et al. Arthritis Rheumatol. 2019 Nov;71(11):1788-1800. doi: 10.1002/art.41032. PMID: 31287230. Clinical Trial.
- Upadacitinib in Rheumatoid Arthritis: A Benefit-Risk Assessment Across a Phase III Program. Conaghan PG, et al. Drug Saf. 2021 May;44(5):515-530. doi: 10.1007/s40264-020-01036-w. PMID: 33527177. Free PMC article. Review.
- Upadacitinib for the treatment of rheumatoid arthritis. Serhal L, et al. Expert Rev Clin Immunol. 2019 Jan;15(1):13-25. doi: 10.1080/1744666X.2019.1544892. PMID: 30394138. Review.

Cited by
- Real-World Persistence and Effectiveness of Upadacitinib versus Other Janus Kinase Inhibitors and Tumor Necrosis Factor Inhibitors in Australian Patients with Rheumatoid Arthritis. Youssef P, et al. Rheumatol Ther. 2025 Feb;12(1):173-202. doi: 10.1007/s40744-024-00736-4. PMID: 39757285. Free PMC article.
- Cancer Risk in IBD Patients Treated with JAK Inhibitors: Reassuring Evidence from Trials and Real-World Data. Puca P, et al. Cancers (Basel). 2025 Feb 21;17(5):735. doi: 10.3390/cancers17050735. PMID: 40075582. Free PMC article. Review.
- The role of sequential biologic therapy in rheumatoid arthritis: a systematic review and meta-analysis of efficacy, safety, and predictive factors. Salieva RS, et al. Clin Rheumatol. 2025 Sep 11. doi: 10.1007/s10067-025-07636-0. PMID: 40932574. Review.
- Gastrointestinal Perforations Associated With JAK Inhibitors: A Disproportionality Analysis of the FDA Adverse Event Reporting System. Goldman A, et al. United European Gastroenterol J. 2025 May;13(4):566-575. doi: 10.1002/ueg2.12736. PMID: 39736095. Free PMC article.
- Efficacy and Safety of Upadacitinib in Patients With Moderate-to-Severe Atopic Dermatitis: Phase 3 Randomized Clinical Trial Results Through 140 Weeks. Irvine AD, et al. Am J Clin Dermatol. 2025 Sep 3. doi: 10.1007/s40257-025-00975-3. PMID: 40900410.
Publication types: Randomized Controlled Trial; Clinical Trial, Phase III; Multicenter Study; Comparative Study.
MeSH terms: Adalimumab / administration & dosage, adverse effects, therapeutic use; Adult; Aged; Antirheumatic Agents / administration & dosage, adverse effects, therapeutic use; Arthritis, Rheumatoid / drug therapy; Double-Blind Method; Drug Therapy, Combination; Female; Heterocyclic Compounds, 3-Ring / administration & dosage, adverse effects, therapeutic use; Humans; Male; Methotrexate / administration & dosage, adverse effects, therapeutic use; Middle Aged; Treatment Outcome.
Substances: upadacitinib; Adalimumab; Heterocyclic Compounds, 3-Ring; Antirheumatic Agents; Methotrexate.
Associated data:
ClinicalTrials.gov/NCT02629159
189372
https://www.cnblogs.com/biaogejiushibiao/p/12315921.html
博弈论 (Game Theory) | 麦奇 (Maiqi) | 博客园 (cnblogs)

Contents
Resources
Lecture 1. Introduction: five first lessons
Lecture 2. Putting yourselves into other people's shoes
Lecture 3. Iterative deletion and the median-voter theorem
Lecture 4. Best responses in soccer and business partnerships
Lecture 5. Nash equilibrium: bad fashion and bank runs
Lecture 6. Nash equilibrium: dating and Cournot
Lecture 7. Nash equilibrium: the Bertrand model and voting
Lecture 8. Nash equilibrium: location, segregation and randomization
Lecture 9. Mixed strategies in theory and tennis
Lecture 10. Mixed strategies in baseball, dating and paying your taxes
Lecture 11. Evolutionary stability: cooperation, mutation, and equilibrium
Lecture 12. Evolutionary stability: social conventions, aggression, and cycles
Lecture 13. Sequential games: moral hazard, incentives, and hungry lions
Lecture 14. Backward induction: commitment, spies, and first-mover advantage
Lecture 15. Backward induction: chess, strategy, and credible threats
Lecture 16. Backward induction: reputation and duels
Lecture 17. Backward induction: ultimatums and bargaining
Lecture 18. Imperfect information: information sets and subgame perfection
Lecture 19. Subgame perfect equilibrium: attracting investment and strategic investment
Lecture 20. Subgame perfect equilibrium: wars of attrition
Lecture 21. Repeated games: cooperation and the endgame
Lecture 22. Repeated games: cheating, punishment, and outsourcing
Lecture 23. Asymmetric information: silence, signaling, and the pain of education
Lecture 24. Asymmetric information: auctions and the winner's curse
References

Resources
(1) Video: the NetEase Open Courses portal hosts a large collection of open courses. The NetEase "Yale Game Theory" version has subtitles only through Lecture 16, but it streams anywhere there is a connection and carries no ads. The YYeTs (人人影视) open-course version had all 24 lectures subtitled when these notes were posted; the translation quality is good and consistent, well suited to study, with several download options and healthy seeds. It is the version I used.

Resource pack: Strategies and Games: Theory and Practice by Prajit Dutta (with exercises), English edition: Strategies_and_games-theory_and_practice(Dutta).pdf.

Films mentioned in the course:
- A Beautiful Mind: Russell Crowe and Jennifer Connelly portray the life of John Nash, who proposed this course's central concept, the Nash equilibrium.
- Dr. Strangelove: one of Kubrick's trilogy, alongside 2001: A Space Odyssey and A Clockwork Orange.
- The Bourne trilogy: arguably Matt Damon's signature role; I like the third film best.
- The Good Shepherd: Matt Damon. Ben's joke: "Yale men are all spies, and spies are all neurotic."
- It's a Wonderful Life: the film Ben brings up at the end of Lecture 5 for the run/withdrawal problem. Strictly it is not a bank but the "building and loan" run by George Bailey (James Stewart), who gives up his own dream for most people's dreams; the ending is genuinely moving.
- Snow White: no torrent for this one. As Ben says, "if they watched this one, they'd be too embarrassed to mention it over coffee."
- Since Russell Crowe is one of my favourite actors, a few more of his films, for contrast with his very different character in A Beautiful Mind: Gladiator, State of Play, L.A. Confidential, The Next Three Days.

Lecture 1. Introduction: five first lessons

The lecture promises five lessons, but it is less solemn than that sounds; lesson 4 in particular is a joke.

Strategic form: actions affect outcomes, but the outcome depends not only on your own actions but on other people's actions too.

Books: Strategies and Games (Prajit Dutta); Strategy (Joel Watson); Thinking Strategically, which Ben strongly recommends. There are two Chinese translations of the last: 《策略思维》 (by Wang Zeke's daughter) and 《妙趣横生博弈论》 (by Dong Zhiqiang of our school); both are fine introductions.

Example: the grade game. Choose α or β without your neighbour knowing. If you choose α and they choose β, you get an A and they get a C; if you both choose α, you both get B−; if you both choose β, you both get B+.

Figure 01-01: my grade. Figure 01-02: my opponent's grade. Figure 01-03: in each cell, the first entry is my grade and the second my opponent's; the matrix contains the entire game. Figure 01-04: the same matrix in numbers, representing utility, which shows the payoffs more directly; an A is worth 3 units of utility, and so on.

People who care only about their own grades: "evil gits" (some books render it 恶棍). Whatever the opponent chooses, α pays better than β: if the opponent plays α, my α gives 0 > −1 from β; if the opponent plays β, my α gives 3 > 1 from β. When α's outcome is strictly better than β's in every case, α is a strictly dominant strategy relative to β.

Lesson 1: never choose a strictly dominated strategy, because the dominating one does better in every play of the game. In this case nobody plays the dominated strategy, yet choosing the dominant one makes the joint outcome worse. (Econ 115: this leads to an inefficient result, i.e. Pareto inefficiency, the state in which the allocation of resources fails to be optimal.)

Classic model, the prisoner's dilemma: if A confesses and B does not, A goes free and B serves 5 years, and vice versa; if neither confesses, 1 year each; if both confess, 2 years each.

Lesson 2: rational choices can lead to bad outcomes. Agreements fail not for lack of communication but for lack of enforcement. The Mafia grows where written contracts are unprotected, acting as a supplement to legal enforcement and upholding all contracts, legal or not.

Indignant angels: refer to the previous payoff matrix. (A, C), I get an A and my opponent a C: 3 − 4 = −1, the −4 being the negative payoff of guilt. (C, A), I get a C and my opponent an A: −1 − 2 = −3, the −2 being the impossibility of explaining such a grade to my parents. Figure 01-05. What people care about has changed, so we get a completely different game. The Concorde fallacy; coordination problems, discussed further in later lectures.

Lesson 3: you can't get what you want, till you know what you want. Always play a dominant strategy; choose among undominated strategies, where losses are smaller; and if your opponent has a dominant strategy, let that guide your own choice.

Evil git vs. indignant angel: Figure 01-06, assuming I am the evil git. Indignant angel vs. evil git: Figure 01-07, assuming I am the indignant angel. Analysed from my side, there is no dominant strategy: when the opponent plays α, my α (0) beats my β (−3); when the opponent plays β, my β (1) beats my α (−1).

Lesson 4: Yale students are evil.

Thinking from the other side: when I play α, the opponent's α (0) beats their β (−1), the first row; when I play β, the opponent's α (3) beats their β (1), the second row. Whether I play α or β, α is the opponent's dominant strategy. Given that the opponent will play it, my own choice: α (0) beats β (−3), the first column.

Lesson 5: put yourself in others' shoes and try to figure out what they will do.

The numbers game: choose a number from 1 to 100 and write it in the box without letting your neighbour see; we will compute the class average, and whoever chose the number closest to 2/3 of the average wins.
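As a quick illustration of Lesson 1, here is a minimal Python sketch (the helper name is mine, not the course's) that checks strict dominance in the "evil git" payoff matrix above:

```python
# "Evil git" grade game: payoffs[my_choice][their_choice], utils from Lecture 1.
payoffs = {
    "alpha": {"alpha": 0, "beta": 3},
    "beta":  {"alpha": -1, "beta": 1},
}

def strictly_dominates(mine, other):
    """True if strategy `mine` beats `other` against every opponent choice."""
    return all(payoffs[mine][theirs] > payoffs[other][theirs]
               for theirs in payoffs)

print(strictly_dominates("alpha", "beta"))  # True: never play the dominated beta
```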
Lecture 2. Putting yourselves into other people's shoes

Opening cases, games akin to the prisoner's dilemma: working on a joint homework assignment, where everyone is tempted to slack off; price competition, where two firms each try to undercut the other; and the commons problem, the use of shared public resources. (For an extension of the last, see Strategies and Games, ch. 7, "An application: the commons problem", p. 85.)

How is a game formed? The ingredients of a game:
Players: written i, j.
Strategies: s_i denotes a particular strategy of player i; S_i, with a capital S, denotes the set of all strategies available to i, as distinct from the particular s_i. In last lecture's numbers game, S_i = {1, 2, 3, ..., 100}. A particular play of the game, lower-case s with no subscript, is called a strategy profile (also strategy vector or strategy list).
Payoffs: player i's payoff depends on player 1's strategy through player N's strategy, his own included, written U_i(s_1, ..., s_i, ..., s_N) and abbreviated U_i(s); it is determined by the strategy profile, i.e. affected by every player's strategy. s_{-i} denotes the strategies of everyone except i, since it is often necessary to compare i's payoffs from different choices of s_i against the same s_{-i}.

The numbers game in this normal form: the winner's payoff is $5 less the error; U_i(s) = 0 otherwise.

An example to get used to the notation (Figure 02-01): players 1 and 2; strategy sets S1 = {Up, Down}, S2 = {Left, Middle, Right}; payoffs such as U1(Up, Middle) = 11 and U2(Up, Middle) = 3.

Definition of strict dominance: player i's strategy s_i' is strictly dominated by another strategy s_i of player i if, whenever the other players choose s_{-i}, the payoff U_i(s_i, s_{-i}) from choosing s_i strictly exceeds the payoff U_i(s_i', s_{-i}) from choosing s_i', for all s_{-i}.

Figure 02-02: from the defender's standpoint there is no dominant strategy, while the attacker, the general Hannibal, has one; it is not strict dominance, only weak. (Quoting the definition given by Professor Dutta.)

The numbers game (rules in Lecture 1). [Not written down here: the distinction between common knowledge and mutual knowledge, which is an important point.] Delete [68, 100]: only if everyone chose 100 would 100 × 2/3 = 66 2/3 be a sensible answer. Deleting dominated strategies leaves [1, 67], and by the same reasoning [45, 67] is deleted next. Strategies in [45, 67] were not even weakly dominated in the original game; they become (weakly) dominated once [68, 100] is ruled out. Deleting [68, 100] is direct reasoning, the choice of a rational player; deleting [45, 67] is the result of thinking from the others' standpoint, since a rational opponent will not play dominated strategies either. Repeating this process eventually leads to 1.

Lecture 3. Iterative deletion and the median-voter theorem

Election example: suppose two candidates and a spectrum of ten political positions, each holding 10% of the vote, evenly distributed; voters vote for the candidate nearest them. Occupy a position alone and you win all its votes; stand on the same position as your opponent and you split them; positions nearer one candidate go entirely to that candidate, and a region equidistant from both splits evenly.

For example (Figure 03-01), if player 1 chooses position 2, they win all of position 2's votes plus all of position 1's, whose voters are nearest to player 1; player 2 wins all votes in positions 4 through 10; position 3 is equidistant from both, so its votes split. If both players choose position 3, the outcome is the same.

Discussion: position 1 is strictly dominated by position 2 (in U1(x, y), the first argument is my position, the second my opponent's; U1(1,1) is my payoff when we both choose position 1; the comparison is between my choosing 1 and 2 against each opponent position):
U1(1,1) = 50% < U1(2,1) = 90%
U1(1,2) = 10% < U1(2,2) = 50%
U1(1,3) = 15% < U1(2,3) = 20%
U1(1,4) = 20% < U1(2,4) = 25%
and for opponent positions beyond 2, choosing 1 stays 5% behind choosing 2. The same argument shows position 9 strictly dominates position 10.

When the opponent chooses position 1, compare my choosing 2 and 3: U1(2,1) = 90% > U1(3,1) = 85%, so 3 does not dominate 2 at the outset. Once the dominated strategies 1 and 10 are deleted, however, choosing 3 strictly dominates choosing 2 (verify the comparison for opponent choices 2, 3, ..., 9 yourself). Iterating the deletion removes 2 and 9, then 3 and 8, then 4 and 7, until only 5 and 6 remain.

Iterative deletion of dominated strategies: the heart of it is role reversal, predicting the opponent's behaviour while remembering the opponent is doing the same about you, and repeating; the process often ends in a unique choice. (Formal definition: Strategies and Games, pp. 51-52.) The prediction is that the candidates crowd into the middle of the ten positions; in political science this is called the Median Voter Theorem (it can also be derived from preferences).

Problems with the model: real voters are not uniformly distributed; voters often vote on a candidate's character rather than platform, and the platform here is a single dimension; the model fits only two candidates; abstention exists; and voters need not believe a candidate's declared position. (Multi-dimensional platforms have well-developed models in political science; this course does not cover them.) On "one dimension, left or right": the more formal name for the trap is the false-dilemma fallacy. On that subject I recommend Asking the Right Questions: A Guide to Critical Thinking, 7th edition, by M. Neil Browne and Stuart M. Keeley (China Light Industry Press in translation); I agree strongly that it is essential reading for developing critical thinking.

Why build models: to describe facts better and to spark insight; a model abstracts the important facts, then adds constraints step by step to refine itself; observe the results and compare how they change. (I like this view a lot; it is the same idea as the PDCA cycle in project management: plan, do, check, act.)

Best response (rendered 最优反应 in Shi Xiquan's Chinese edition of Strategies and Games, the translation used from here on without further note; in dynamic games, the first mover's counterpart is the reaction correspondence).

In this game nothing is dominated, so deleting dominated strategies, iteratively or otherwise, gets no traction. Choosing U is the best response (BR) to an opponent playing L; choosing M is the BR to R. If the opponent plays L and R with equal probability (1/2, 1/2), my expected payoffs are:
U: 5 × 1/2 + 0 × 1/2 = 5/2
M: 1 × 1/2 + 4 × 1/2 = 5/2
D: 4 × 1/2 + 2 × 1/2 = 3
(Suppose instead unequal probabilities, say (2/3, 1/3), and redo the expected-payoff calculation yourself.)

Plot three lines, U, M and D, each giving my expected payoff as the opponent's probability of playing R varies: below some X, choose U; above some Y, choose M; between X and Y, choose D. Each line's equation follows from two of its points; intersecting the lines pairwise solves for X and Y, giving X = 1/3.

Lecture 4. Best responses in soccer and business partnerships

The penalty-kick game: entries are scoring probabilities, e.g. U1(L, l) = 4 means shooting left against a keeper diving left scores 40% of the time. Plot expected payoffs as in Lecture 3: the red line is shooting left, the green line right, the blue line middle. A keeper diving right against a left-side shot still concedes 90%, allowing for the 10% of shots that fly wide.

Reading the graph: when the keeper dives right with probability below 1/2, the BR is to shoot right; and shooting up the middle is never a best response for any belief. Conclusion from the coloured lines: never choose a strategy that is a best response to no belief at all, here the blue line, which lies below the others over every interval. (The model ignores right-footedness.)

Variant: more power costs accuracy, and shooting down the middle can become the best choice. With power shots the probabilities change, to 8-3, 3-8 and 7-7; in the stretch of the x-axis under the middle triangle between the dashed lines (the range between the two orange points), the middle shot is the best response.

Definition of a player's best response to an opponent's strategy: in effect a comparison using the von Neumann-Morgenstern (VNM) expected-utility function (see Strategies and Games, p. 21). Expected payoff: in this example, when player i holds belief p about the keeper, the expected payoff of shooting left equals the probability the keeper dives left times i's payoff when both go left, plus the probability the keeper dives right times i's payoff of shooting left against a right dive.
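Before the notes turn to the partnership game, here is a small sketch of the penalty-kick best-response scan just described, using the scoring numbers from the notes (the function name is mine):

```python
# Shooter's scoring chance (%): score[shot][dive], numbers from the notes.
score = {"L": {"l": 40, "r": 90},
         "M": {"l": 60, "r": 60},
         "R": {"l": 90, "r": 40}}

def best_response(p_right):
    """Best shot given the keeper dives right with probability p_right."""
    expected = {shot: row["l"] * (1 - p_right) + row["r"] * p_right
                for shot, row in score.items()}
    return max(expected, key=expected.get)

for p in (0.2, 0.5, 0.8):
    print(p, best_response(p))  # R below 1/2, L above 1/2; M is never a BR
```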
The partnership game: two players jointly own a firm, each holding 50% of the shares, as partners. Each shareholder chooses how much effort to put into the firm, measured in hours, with strategy set S_i = [0, 4]: any real number of hours between 0 and 4, a continuous interval, unlike the numbers game where only integers could be chosen.

By the same method as before, over the domain [0, 4] of s2 we can plot player 1's BR as a function of s2 (the red line), and likewise player 2's BR as a function of s1 (the blue line). [0, 1) ∪ (2, 4] can never be a best response for player 1; since players do not choose dominated strategies, that region is deleted (see the figure), and the corresponding region for player 2 is deleted in the same way. Taking the intersection of the two deleted regions leaves only a small central block. Enlarging that block (Figure 04-08) gives a picture identical to the original except for the coordinates, so deleting non-best-responses again, and iterating, shrinks everything to a single point.

(Marginal benefit and marginal cost are introduced here; since the Cournot duopoly of Lecture 6 needs them, I will add some economics background in the Lecture 6 notes.) There is an externality here. The intersection point in the figure is the celebrated Nash equilibrium point: there, every player is playing a best response.

Lecture 5. Nash equilibrium: bad fashion and bank runs

Definition of Nash equilibrium: a strategy profile is a set containing one chosen strategy per player, written (s1*, s2*, ..., sN*). A Nash equilibrium (abbreviated NE) is a profile satisfying: for every player i in the set, the chosen strategy s_i* is a best response to the strategies chosen by the others, written s_{-i}*. [Put simply, an NE is a set of strategies in which each player's choice is a best response to the other players' choices.]

Motivation for studying NE: no regret about the decision made at the time, because a best response was already taken. [Equally, this is each player's motive for choosing the NE; just as important, an NE is self-fulfilling/self-enforcing.] No player strictly gains by changing strategy; with the others' behaviour held fixed, a unilateral change brings strictly no benefit.

Finding the best responses: NE = (M, C). The connection between NE and dominance: if α strictly dominates β, then NE = (α, α). A strictly dominated strategy is never a best response, and only best responses can appear in an NE. Play naturally develops toward an equilibrium; the self-enforcing outcome keeps tending toward an NE. An effective way to find an NE is guess and check.

The inferior no-investment equilibrium is Pareto-dominated by the better NE. Coordination can be achieved here because, unlike the prisoner's dilemma, nobody is being persuaded to adopt a strictly dominated strategy. Coordination games; coordination failures. It's a Wonderful Life (Jimmy Stewart): persuading people into the better NE; the torrent is in the resource pack.

Lecture 6. Nash equilibrium: dating and Cournot

Going to the movies: B = The Bourne Ultimatum; G = The Good Shepherd; S = Snow White and the Seven Dwarfs. ("But I don't think wandering around waiting for your prince to show up is a great strategy for the modern woman. If you are doing that strategy, take it from a Brit: most princes are as dumb as toast, not worth waiting for.")

She wants to see The Bourne Ultimatum and he leans toward The Good Shepherd; neither wants Snow White; and each would rather go together than alone, earning no payoff otherwise. ("If the two of them coordinated on Snow White, they'd be too embarrassed to mention it over coffee.") S is a dominated choice for both players, so delete it. NE = (B, B); NE = (G, G). The difference from last lecture's game, which was pure coordination with no conflict of interest: this is the Battle of the Sexes.

Cournot duopoly (Strategies and Games, p. 69).
Players: two firms.
Strategies: output of a homogeneous product, q1 and q2 for the two firms; cost is c × q_i, with c the cost of producing one unit.
Price: with market parameters a and b, p = a − b(q1 + q2); the more the two firms produce, the lower the market price.
Payoff: U1(q1, q2) = p × q1 − c q1, revenue minus cost. Substituting the price expression:
U1(q1, q2) = a q1 − b q1² − b q1 q2 − c q1.
Maximising this at q2 = 0 gives firm 1's best response to firm 2 producing nothing, namely the monopoly quantity.

Completing the price-quantity picture: the perfectly competitive quantity sits at the intersection of the demand curve and marginal cost, where price equals cost; when one firm already produces that quantity, the other firm's BR is to shut down, since producing would push the price below cost. The monopoly quantity sits at the intersection of marginal revenue and marginal cost (point d). (I had planned to discuss here why profit peaks where marginal revenue equals marginal cost, but the relevant Baidu Wenku and MBA Lib entries explain it very clearly, so I will not annotate it; interested non-economists can look there.)

Plot firm 1's best response to every output of firm 2; imposing symmetry, q1* = q2*, and solving gives q* = (a − c) / (3b). (Figure: the intersection of marginal cost with the demand curve marks the perfectly competitive quantity.) Cournot solved this game more than a hundred years before Nash was born. The Cournot game differs from Lecture 5's partnership curves, which slope upward: this is not a game of strategic complements but of strategic substitutes.

The monopoly quantity maximises industry profit (the two points α and β in the figure); each firm producing half of it (the outputs, in light blue, corresponding to the midpoint of the α-β segment, the red point) also maximises industry profit. The problem is that signing such an output-restriction agreement is illegal, and even a private agreement remains unstable. One firm responds to the other's quantity: as in the figure, firm 1, seeing firm 2 at point A (half the monopoly output agreed on), picks the quantity on its own best-response curve (the red line), point C; firm 2 then best-responds to C along its own curve (the blue line), and so on. One side's over-production triggers the other's best response, and the iteration converges toward the Nash equilibrium. Sustaining monopoly output by private agreement is therefore very hard: lacking enforcement, both sides are motivated to defect.

Perfectly competitive quantity > Cournot quantity > monopoly quantity. Besides quantities, compare prices: competitive price < Cournot price < monopoly price.

Lecture 7. Nash equilibrium: shopping, standing and voting on a line

Bertrand competition: Cournot competes in quantities, Bertrand in prices.
Players: two firms producing an identical product, with a fixed marginal cost of c per unit.
Strategies: prices; in this example p1 and p2 denote the two firms' prices (note this differs from the earlier s_i notation). Each firm may set 0 ≤ p_i ≤ 1, the strategy set of the earlier lectures, and adjusts output as needed.
Demand: Q(p) = 1 − p, where p is the lower of the two prices, and the cheaper firm serves firm-level demand. (Reality does not fully match these expressions; the model makes many deliberate simplifying assumptions.)
Payoff: q1 × p1 − q1 × c = q1(p1 − c).

To find the NE, first find firm 1's BR as a function of firm 2's price, a piecewise rule:
(1) If firm 2 prices below cost, firm 1 must price above p2 to avoid losing money on every unit sold, which also means selling nothing: exiting the market.
(2) If firm 2's price exceeds cost, firm 1 need only go a tiny bit lower, by some ε, to capture the whole market, and should stay below the monopoly price, since the monopoly price is where profit is greatest.
(3) If firm 2's price exceeds the monopoly price, firm 1 charges the monopoly price.
(4) If firm 2's price equals marginal cost, firm 1 chooses any price at or above marginal cost.

NE = (p1 = c, p2 = c). The outcome closely resembles perfect competition even though there are only two firms; this is called the Bertrand paradox. The same set-up as last time, with a different strategy variable, yields a completely different result. (Refining the model is left here as homework for the students.)
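To tie the two duopoly results together numerically, here is a short sketch with illustrative parameters (the values of a, b and c are my own choices, not from the lecture) that reproduces the output and price orderings stated above:

```python
# Linear demand p = a - b*(q1 + q2), constant marginal cost c (illustrative values).
a, b, c = 10.0, 1.0, 2.0

q_monopoly    = (a - c) / (2 * b)      # total output maximising joint profit
q_cournot     = 2 * (a - c) / (3 * b)  # total output: each firm makes (a-c)/(3b)
q_competitive = (a - c) / b            # price driven down to marginal cost (Bertrand)

for name, q in [("monopoly", q_monopoly), ("Cournot", q_cournot),
                ("competitive/Bertrand", q_competitive)]:
    print(f"{name}: total output {q:.2f}, price {a - b * q:.2f}")
# Outputs: competitive > Cournot > monopoly; prices run the other way.
```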
线性城市模型(Linear city model) 一个路贯穿城市,两个公司分别坐落在 0、1 点,消费者 y 到公司 1 的距离为 y,到公司 2 的距离为 1-y,假设每个消费者买且只买一个产品。消费者会选择对他而言总成本最小的 例如:在 y 点的消费者,如果从公司 1 购买则他们支付 1 + 2,产品的价格 1,和交通成本 2;到公司 2 购买则需要支付 1 + (1 − )2,交通成本以距离的平方的速率增长。 作业为解出公司针对每一个其可能设定的价格,它的需求是什么,并找到所有的纳什均衡。 候选人选民模型(Candidate-Voter model) 假设选民在线上平均分布,选票的获得与第三讲中一致,与该模型的区别:①候选人的数目不固定;②候选人不能选择他们的立场;假设每个选民是一个潜在的候选人 参与人:选民 策略:是否参选(选民将选票给与最近的候选人,得票最多者当选,平局掷硬币)收益:获胜赢得奖励 B,参选付出成本 C,且 B>2C;若选民不参选获胜者的立场距离该选民越远,则该选民将承受越重的负面效应,若该选民在线上 X 点,获胜者在 Y 点,则承担−| − |的成本,两点间距离的负向效应,也就是对方当选后给未参选的选民造成郁闷程度。 例如:三种可能的情况 ①Mr.x 参选并获胜,他的收益为 B − C ②Mr.x 参选,但 Mr.y 获胜,Mr.x 的收益−𝐶 − |𝑋 − 𝑌| ③Mr.x 不参选,但 Mr.y 获胜,Mr.x 的收益为−|𝑋 − 𝑌| 假设 = 2;=1;=1 ;选民为 17 人,每一个立场的价值为 (1/17 )$ 图形类似第三讲的图形,不过立场变为了 17 个 假设位于中间的选民参选,那么对于其他任何一个选民来说再参选都不是一个 NE 均衡,因为相对于第二个参选的选民不参选都是更好的收益。相当于 Mr.x 的②③两种情况的对比。 如果非中心点的选民参选,同样也不是个 NE,因为中心点的选民参选相对于不参选而言成为了他的优势策略。 假设依照上图分析,两个对称点的选民参选,如 4 号和 7 号,那么 1、2、3 号和 8、9、10 参选将使一个劣势策略,因为他不仅不会当选,且会分掉离自己更近的候选人选票,从而把当选者推向离自己更远的立场。 回到顶部 第八讲:纳什均衡之立场选择、种族隔离与策略随机化 Nash equilibrium: location, segregationand randomization 继续上一讲的候选人模型 结论 1:此模型可能存在多个 NE 并非所有均衡中的候选人都保持中间立场 结论 2:如果左派有一个新的候选人加入,可能会导致右派获胜的概率增大,反之亦然。 三个候选人分别处于 1/6 立场,1/2 立场和 5/6 立场,此时每人当选的概率为 1/3,此时若左派的候选人稍微向 1/6 右侧靠近一点,右派的候选人稍微向 5/6 左侧靠近一点,那么中间立场候选人的选票就会被这两个候选人分掉一小部分,从而使中间候选人被排挤掉。 结论 3:如果候选人太极端就会有新的中间候选人参选。 选址模型(Location model) 假设两个小镇,东镇和西镇;世界仅有两种人,高个和矮个;每种人都有 10 万,每个城镇都只能容纳 10 万人;参与人:高个、矮个策略:选择东镇还是西镇 如果城镇只有参与人是矮个,其他人都是高个,那么参与人的收益为 0,反之亦然;如果是高个和矮个混居,数量都是城镇人口的一半则收益达到最大;如果城镇全是矮个或高个则收益是最大值的一半。 人们可以自由选择想要居住的城镇,如果选择一个城镇的数量超过了容积,则会从所有选择该城镇的参与人中随机抽取,分配到另一个城镇。 例如有 15 万人选择东镇,那么每个人只有 2/3 的概率可以住在这里,另外随机抽取 5 万人,会被分配到西镇去。 ①两个 NE 是种族隔离;一个 NE 是每个城镇中不同人种均匀分布;两者皆为严格均衡,后者稳定性差,“弱均衡”这三种情况下参与人都无法通过改变策略来取得更高的收益 ②临界点(Tipping Point) ③另一个不太现实的均衡,所有人都选择同一个城镇而被随机分配 结论: ·看上去毫无意义的博弈规则,有时是很重要的条件。 ·社会随机分配,其结果要比所谓的自主选择要好。 结论: ①模型中种族隔离的结果,不能作为人们喜欢种族隔离的论据。 ②随机分配(randomization);校车现象(bussing) ③可以通过自下而上的方式实现随机分配。 每个人都通过抛硬币的方式来决定去那个镇子,选址模型的另一个 NE。 抛硬币的不确定性,引入混合策略(Mixed strategies),在这之前接触的都是可选的纯策略。 猜拳博弈 在纯策略(pure)中没有 NE NE 按 1/3 概率选择混合策略 回到顶部 第九讲:混合策略定义及其在网球比赛中的应用 Mixed strategies in theory and tennis 混合策略用 表示, i 表示参与人, 表示采用每个纯策略的概率 用 ( ) 表示在混合策略 下,参与人 i 采用 的概率,即 ( )是 赋予 纯策略 的概率。 例如:上一讲中的猜拳博弈, 可以将纯策略看做是一个特殊的混合策略,即赋予某个策略的概率为 1; 混合策略的收益: 混合策略 的预期收益,每个纯策略预期收益的加权平均数 计算预期收益: 结论:如果一个混合策略是 BR,那么混合策略中的每个纯策略必须也是 BR,也就是说它们的收益必须相同。 定义:一个混合策略( 1∗, 2∗,…… ∗ ),是一个混合策略 NE,当且仅当对任意参与人 i ,在面对 −∗ 时他的混合策略 ∗,是该参与人的 BR。 含义:如果 ∗ 中某个纯策略被赋予正概率,那么该策略本身是一个 BR。 网球博弈: 参与人:Venus Williams and Serena Williams 策略:Venus 可以选择把球打到对手的左侧(反手),或是右侧(正手)收益矩阵如下 例如: 1( , ) Venus 将球打向对手的左侧,而对手判断失误,采取了向右的预判,那么 Venus 得分的机会为 80%,而对手防守得分的机会为 20%。 假设 Serena 右手截击的水平高于左手。不存在纯策略的 NE,寻找混合策略的 NE。 首先来找到 Serena 的 NE 混合策略( ,1 − ),为此需要分析 Venus 的收益 Venus 面对 Serena 采取 ( ,1 − ) 时的收益 L:50 × + 80 × (1 − q) ① R:90 × + 20 × (1 − q) ② 如果 Venus 的混合策略属于 NE,那么选 L 和 R 的收益一定相等,进而预期收益一定相等。 联立 ①② 解得: = 0.6 Serena NE 通过 Venus 收益求出的 Serena 的混合策略,既然知道了 Venus 也采取混合策略,那么我就可以令 Venus 的两种收益相等。 然后计算 Venus 的混合策略 找到 Serena 的 NE 混合策略 ,1 − ,为此需要分析 Serena 的收益 Serena 的收益 l:50 ×q + 10 × (1 − p ) ① r:20 ×q + 80 × (1 − p) ② 联立 ①② 解得:p = 0.7 NE = [ 0.7,0.3 ,( 0.6,0.4 ) ] 如果 Serena 防左的概率大于 0.6,那么 Venus 的 BR 是把球打向右侧如果 Serena 防左的概率小于 0.6,那么 Venus 的 BR 是把球打向左侧 新教练改善了,Serena 打反手球的水平导致的结果:① 直接影响 提高 ② 间接影响、战略影响 降低使用解得 q相同的方式解得新的均衡 q ′ = 0.5 结果证明,间接影响的作用更大使用解得 相同的方式解得新的均衡 p′ = 7/12 < 7/10 《策略与博弈》中考虑混合策略的意义理由 1:混合策略可能优于一些纯策略(这些纯策略本身并不劣于其他纯策略)。P101 理由 2 混合策略的最差情况可能好于所有纯策略的最差情况。P103理由 3:如果我们只限于纯策略,那么,我们也许不能找到博弈的纳什均衡。P104 回到顶部 第十讲 混合战略棒球,约会和支付您的税 Mixed strategies in baseball, dating and paying your taxes 验证上一讲混合策略 p∗ = 0.7 q∗ = 0.6 是否是 BR Venus 面混合策略 ∗是 Serena 的混合策略 ∗的 
BR Venus 的收益: Venus 在对手采取( 0.6,0.4 ) 的混合策略下纯策略的收益分别是L: 𝑈𝑉[ L,( 0.6,0.4 )] = 50 × 0.6 + 80 × 0.4 = 62 R:𝑈𝑉(R,(0.6,0.4 )] = 90 × 0.6 + 20 × 0.4 = 62 Venus 采取 ∗的混合策略的收益: 𝑈𝑉[ ( 𝑝∗,1 − 𝑝∗ ),(0.6,0.4 ) ]= 0.7 × 62 + 0.3 × 62 = 62 Venus 不存在改变纯策略的严格有利改变,她任何一个纯策略的收益都与混合策略∗的收益相等。 证明混合策略不存在严格优于 ∗的混合策略,回忆一下上一讲混合策略收益的定义,纯策略、加权平均数 结论:只需要考虑改变纯策略是否严格有利即可。 因为就混合策略本身的定义来说就不会有严格有利的混合策略偏离,两个相同的数怎么加权都是一样的。 apple picking 摘苹果 Yale Rep and see play 耶鲁剧院去看戏 两个纯策略 NE (AP,AP) (REP,REP) 性别大战混合策略下的均衡找出 NE 通过 Nina 的收益来求出 David 的策略 𝑈𝑁 [ AP, (q,1 −q)] = 2 ×q + 0 × (1 −q ) = 2q① 𝑈𝑁 [ REP,(q,1 −q)] = 0 × q+ 1 × (1 − q) = 1 − q② 联立①② 解得: q= 1/3;1 −q = 2/3 同理解得: p= 2/3;1 −p = 1/3 证明 BR 与威廉姆斯姐妹网球博弈的证明方式一致,且该处比较完整。 人们并不是完全随机化的,可以把混合策略看成处于均衡时人们的某些信念。 税收检查: 参与人:纳税人 tax payer;审计员 auditor 策略:如实申报 honestly;隐瞒申报 cheat 审计员的收益: 最好的结果,不审查而纳税人如实申报,收益为 4;抓到漏税收益也为 4;最糟的结果,不审查,但纳税人逃税成功,收益为 0;审查而纳税人如实申报,因为审查是有成本的,因此收益为 2; 纳税人的收益: 瞒报被查出巨大损失-10,逃税成功收益为 4。 纯策略不存在 NE,寻找混合策略的 NE 通过审计员的收益来求得纳税人的策略 联立①② 解得: q = 2/3 同理通过纳税人的收益来求得审计员的策略,解得: = 2 /7 政策试验,提高惩罚,从-10 增加到-20 审计员的收益等式为发生变化,因为他的收益没有变化,也就是说纳税意愿对他是 否选择审计检查无影响。q = 2/3 纳税人的收益:对均衡纳税意愿有影响,决定着纳税人的混合策略是审计员的收益,不改变审计员的收益,当然也就不会改变纳税人的均衡混合策略。 提高对逃税的惩罚并没有提高纳税人的纳税意愿,却降低了审计员的审计概率 1/6 < 2/7 举例:提高逃税的收益,将导致审计概率的提高,因此商学院高收入的教授反而拥有更高的纳税意愿,因为较高的审计概率的存在。 联邦审查率的设计更多的去审查富人,这并不是说明穷人更诚实而富人更。 鉴于国会意愿属于富人阶层,让国会议员获得制定审查率的权利是不明智的,他们可能会报有其他政治目的而非提高税务系统的整体效率。 本讲有两个重点要补充: 1.关于混合策略的三种解释: (1)某个 player 随机化 ta 的策略 (2)某个 player 对另一个人采取某种策略的概率估计 (3)群体中特定参与人的比例 2.求混合策略的方法: (1)设某个 player 采取某个策略的概率,通过令另一个 player 的收益无差异来求这个概率 (2)在给定另一个 player 的混合策略下,对某个 player 的收益函数求一阶条件可以求得另一个 player 的混合策略 (3)无论用上述何种方法,最好算出来了检验一下是否有偏离该混合策略的激励,计算上只需要检验纯策略 回到顶部 第十一讲 进化稳定:合作,突变,与平衡 Evolutionary stability: cooperation, mutation, and equilibrium 进化论 (Evolution) ① 博弈论对生物学的重大影响,尤其在动物行为学中把基因看成策略,把遗传适应性当做收益,好的策略使种群不断壮大,即有适合基因的个体会繁衍,带有不适合基因的个体会灭绝。 将动物的行为(策略)看做是天生,而不是自由选择 ② 生物学尤其是进化生物学,对社会科学产生了重大影响 一个经济学案例: 假设市场中存在这样的公司,这些公司并不关心什么策略能最大化利润,什么策略能尽可能降低成本,它们可能毫无科学根据地选择策略,在竞争激励的市场环境下,只有那些成本较低但利润颇丰适应环境的公司才能得以生存下来。 简单的回忆一下高中生物知识,基因突变是不定向的,而自然选择则是定向的。基因(DNA),存在于细胞核,通过 RNA 将自身的片段输出到细胞核以外,以其上的编码来指导蛋白质合成,从而控制干细胞的功能细胞,形成组织器官,构造生物体。 比如长颈鹿的脖子,可能这个物种最初没有这种特征,但在种群当中极小的一部分个体发生了基因突变,这个突变是不定向性,也就是说可能出现蹄子大的,腿长的,大耳朵的等等。而这个物种的普遍的生存环境下,赖以为生的植物很高大,此时那些脖子长的个体则有更多的机会填饱肚子。 低矮的空间内有更多的竞争者,那么个体分得食物量将受到限制,而在高处则只有这部分数量极少长颈的个体在分享着食物,充足的食物意味着这部分个体的平均体魄比其他非长颈的更好,那么在与天敌或其他致命危险对抗时,这部分的存活几率会更高,基因在种群中的比例也就会逐渐提高, 而这种优势是可以通过遗传给予后代的,随着时间的推移,原来的突变少数成了种群中的大多数,最后完全淘汰那些短颈个体。这并不是说蹄子大的,腿长的,大耳朵的变异没有用,只是说在这个环境中长颈更占优。而环境是自然形成的,它赋予了某些突变基因生存的优势。这也就是自然的定向选择。 公司倒闭和基因灭绝道理是类似的。 简化模型,专注于种内竞争,通过双人对称博弈来进行研究,很大的种群,采取的策略与生俱来,对其进行随机配对。即采取相对成功策略的个体数量会增长,相反则会减少。不存在基因的重新分配。 合作 cooperate 背叛 defect 合作是否是一个稳定策略?假设有1 − 的蚂蚁都是合作型 ②>① → C 相对于 D 不是进化稳定策略 (Evolutionarily Stable),简写为 ES背叛的个体在种群所占的比例会逐步提升,直至全部种群皆为背叛个体背叛是否是一个 ES?做一次反向试验来进行验证 假设有1 − 的蚂蚁都是背叛型 ES? 
b 本身不是 ES,同时 c 也不是 ES 如果策略 s 或者( s , s )不是 NE,那么策略 S 就不是 ES。即如果 s 是进化 ES,那么( s , s )一定是 ES。 对任意 s′ 都成立,对任意 都成立。 回到顶部 第十二讲 进化稳定:社会公约,侵略,和周期 Evolutionary stability: social convention, aggression, and cycles 社会传统的进化 (evolution of social convention ) 靠左行车还是靠右行车 类似性别大战( Battle Of The Sexes)的收益矩阵 a 使用攻击性策略,不会躲避,b 仁慈性策略,在相撞前会规避一个著名的例子就是 Chicken Game 叫做斗鸡博弈或胆小鬼博弈《策略与博弈》 P33 鹰—鸽 博弈 (强硬—懦弱)NE (b,a) (a,b) 此博弈中不存在对称纯策略 NE ,需要考虑混合策略 混合策略下的性别大战的 见第十讲 摘苹果和看戏剧 单型(Monomorphic):只有一个形态或一个类型——单型种群多态(polymorphic)——混合型种群 策略 是混合策略 ES 混合策略偏离 比如 (2/3 ,1/3) 换为 (1/3 ,2/3)那么结果和混合策略相同,a 的个体突变相对于混合策略结果与混合策略本身的结果 是一样的。所以在混合策略的 NE 里不可能是严格的。 为保证 ES,检验是否满足(b), 对所有可能的混合 变化p′ for all mixed mutations p′ 此处做个一个简单的讨论而没有去使用严格的数学证明,但它已经足够了。 自然界中混合均衡的两个解释 ① 基因本身是随机的 ② 稳定混合也意味着在 ES 中,以这种比例稳定存在海象的例子 SLF (sneaky little) 鹰—鸽之争 Hawk and Dove 奖励 V > 0 战利品 代价 C > 0 D 是一个进化稳定策略(ESS,Evolutionarily Stable Strategy)吗? 第一部检验,NE (D,D)? 不是 NE,所以不是 ESS H 是 ESS? 第一部检验,NE (H,H)? 检查鸽派对鹰派的收益 在这个例子中没有 ESS 三色蜥蜴例子,解释循环维持平衡的过程 印象没错的话,高中生物教材称它为生态平衡,只不过这个概念更为广泛不是在博弈模型中的单纯种内,而是同时考虑种间、外界环境等内容。 回到顶部 第十三讲 序贯博弈:道德风险,激励和饥饿的狮子 Sequential games: moral hazard, incentives, and hungry lions 帽子里的钱 (Cash in a hat) 关于这个游戏的说明请认真听,具体描述请对应下面的树形图(《策略与博弈》中叫展开型,这个名词个人感觉更为恰当)。 树形图只是一个笼统的说法,它可以指决策树又可以指博弈树,前者在决策论使用,后者在博弈论中使用,结构相似但决策论跟博弈论不是一回事。 extensive 也有种译法叫扩展式,跟策略式相对,张维迎的书也有写,都是翻译不同而已。 序贯博弈 (Sequential games) 参与人 2 在作出决定之前知道参与人 1 的决策,且参与人 1 知道这种情况。 个人绘图说明,在本树形图中(请注意与 Ben 的方式略有不同): 1)中间节点没有对每个节点标明参与人 2 可以在此做出选择,而是以一条与参与人2 颜色相同的直线了表明在此处他可以进行策略选择; 2)为节省空间最上面的分支不再采用相同的斜率延伸到与终点处于相同水平位置后写出结果,而是用水平直线来进行延伸; 3)在所有分支中节点都有黑色圆点,无节点则此处不提供策略选择,但遵照上一条会对该处进行水平延伸,以使得所有结果都在一个水平位置方便比较; 4)对于分枝的决策不是画箭头表示,而是直接将该分枝变换颜色,同时加粗线条;后续的所有树形图,除特殊情况,皆以该方式绘制。希望如此能易于阅读。 关键问题是作出预测 (anticipation) 沿着树形图向下看,站在后行动参与人的立场上思考,看下级参与人会有什么动机,找到他们的 BR,再根据树形图倒回来。 即向树的分枝看,然后在回到树的主干上来。 记得 Thinking Strategically 这本书上对逆向归纳法有个很好的说法:looking forward,thinking backward 实例说明: 1.收益站在参与人 2 的角度做出判断,上分枝参与人 2 没有选择权,无需分析;中分枝,1.5 相对 1 是优势;下分枝,3 相对 2 是优势;参与人 2 的分枝选择已用红褐色标出。未标出可以认为已经作为劣势策略被剔除。 2.逆向推进一层站在参与人 1 的角度,可以选择的三个策略分别对应的结果为:0、1、-3。找到参与人 1 的优势决策。 以上方法的名称——逆向归纳(BI,backward induction) 希望得到一个更好的结果,某种动机却阻止我们达成更好的结局,称之为道德风险(moral hazard)。 典型的道德风险 选择限制项目的规模,或者说贷款额度,通过降低规模来降低被骗的风险。 改变 ① 3 ② 3 ② 3 分枝的收益分配,有原来的(3,2)变为(1.9,3.1) 激励设计(incentive design) 动机不是上天赋予人们的,它是由合同双方设计出来的。 “有时大蛋糕的一小块,可能比小蛋糕的一大块要大。” 担保 (collateral) 担保的作用在于,它降低了你不偿还贷款的收益。但却使你过的更好了,因为它改变了其他人的行为,这对你却是有益的。 AD 1066 征服者威廉 登陆英格兰参与人: 诺曼底公爵威廉率领的侵略者 Norman 哈罗德率领的撒克逊防御者 Saxon 可选策略:战斗 (Fighting) 逃跑 (Running Away) 威廉的初始策略:破釜沉舟(Burn);留条后路(Not Burn) 此处将选择策略的颜色换成了更为鲜明的色彩,上面那个图比较小还好,这个图使用柔和的色彩确实差一些了,与参与人相近的鲜艳色彩表明参与人的选择。最后一个层级的策略与前一层相同上分枝为 F。 承诺(commitment) 减少可选策略而改变其他人的行为,改变不了其他人的行为则毫无意义。 《奇爱博士》 Dr. 
Strangelove 斯坦利·库布里克 Stanley Kubrick“必须要让对手知道。” 节点(Node) 终点(End Node) 连线(edge) 路径(path) 5 号狮子知道没有来自后方的威胁,于是准备放心大胆的吃掉 4 号狮子; 4 号狮子知道背后有个虎视眈眈的家伙,于是只能对着美餐流口水; 3 号狮子预料到 4 号狮子的顾忌,于是悠闲的等着享用 2 号狮子; 2 号狮子不想让 3 号狮子得逞,只能忍饥挨饿; 1 号狮子:“我吃了绵羊还是首领!” 绵羊:“为什么狮子得数量不是偶数。” 回到顶部 第十四讲 逆向归纳:承诺,间谍,和先行者优势 Backward induction: commitment, spies, and first-mover advantages 对于古诺博弈的详细讨论见 第六讲 斯塔克伯格 (Stackelberg) 厂家 2 已经知道 q1,需要选择 q2 厂家 2 针对 q1,按照 BR 曲线,选择与之对应能最大化厂家 2 利益的 q2; 101 厂家 1 知道了这个q 2,又会根据它来调整自己的最优反应——产量 q1,于是厂 家 2 再根据这个 q1′,再决定出 q2′,从而无休止的继续下去。 第一部的思考,站在厂家 1 的角度,它知道任何选择都会导致厂家 2 作出依照规律的相应选择。 策略代换 (Strategic Substitutes) 厂家 1 增产,厂家 2 就作出减产来回应 1 ↑,B 2 1 ↓ 在斯塔克伯格模型,厂家 1 不需要知道厂家 2 的产量也能有理由超过古诺产量继续生产,因为这可以迫使对手减产,对厂家 1 是有利的。 厂家 1 的利润一定会上涨市场上的总量 1 + 2 的影响 根据图像,q 2每减产 1 个单位, q1的增产量多余 1 个单位 此处简单解释一下: 例如左侧的图中直线方程为y = x 此时斜率为tan45°,而右侧直线方程为y = −x,此时斜率为tan135°,此时两个斜率的角度值互为补角。一直觉着在博弈论中说到斜率就很别扭,在此特别注释一下。 斜率问题已经说过,经济学指绝对值,算弹性的时候也一样 这是我外行的一个例证,这个错误就保留下来给非经济类专业的读者做个提醒吧,同时还想说一下 Ben 的博弈论讲的确实很易懂,即便对于非经济专业的听众。 ( q1 + q2) ↑ ,P ↓,厂家 2 的利润下降; 消费剩余(CS,consumer surplus)↑ 数学证明: 需要注意的时上面的两个分支在连接( 2,1) 状态下的博弈树图时需要改一下参与人顺序。而其他任何的状态都可以想象为这两种状态的推广。 “在双方都会玩 NIM 时,永远不要让自己在两堆相等的时候获得选择权。” 回到顶部 第十五讲 逆向归纳:国际象棋,战略和可信的威胁 Backward induction: chess, strategies, and credible threats 策梅洛定理(Zermelo theorem) 两个参与人,完全信息博弈,博弈有限节数 三个结果参与人 参与人 1 有赢策略,不论参与人 2 如何应对 参与人 1 有平局策略,不论参与人 2 如何应对 参与人 2 有赢策略,不论参与人 1 如何应对 此处 NIM 拿子游戏见第十四讲最后的树形图 使用归纳的方法证明(by induction) 把博弈的最大长度用 N 来表示,要在博弈的最大长度上进行归纳证明。 用优势的末节点取代起点假设这个命题对所有这样的博弈,在长度为 N 时都成立正确的字幕是 path≤N 证明所有的长度为N + 1的博弈也都成立 N = 3 N + 1 = 4 子博弈——博弈中的博弈,淡黄色(长度为 3)、淡绿色(长度为 2)的两个区域分别是两个子博弈。 根据归纳假设(induction hypothesis),此博弈(长度为 3 的博弈)有解。假设其解为 W;长度为 2 的博弈有解,假设其解为 L。 上面的博弈可以被转化为: 这是一个长度为 1 的博弈,有解。 如果长度为 N 或更少的博弈有解,那么长度为 N+1 的博弈有解。 Marienbad 石子阵列,N 行 M 列,可供选择的策略,被选中的点,其左、上的所有石子被拿走。如图若选中蓝色的点,淡黄色区域内被移除。参与人交替进行选择,拿到最后一个字的人输 作业:证明,根据策梅洛定理,无论 N、M 等于多少,此博弈都有解 完全信息博弈 (games of perfect information) 在任意一个节点上或者说每个节点上被轮中的参与者,都知道自己处在真个博弈的哪个节点的博弈。这也暗示着,参与者知道如何到达该节点。 纯策略,在一个完全信息博弈里,参与人 1 的纯策略,它是一个完整的行动计划,这个纯策略明确了参与人 1 将要在每个节点上采取怎样的行动。 例如: 这个树形图绘制不采用前面树形图的方式,否则反而不易观察了。后面的简单树形图,同样采用此方法,目的都是方面阅读。 参与人 2 策略:[ l ] [ r ] 笔记存在的问题敬请指正 By Apollo QQ:77981960 Gabriel QQ:460323397 参与人 1 策略:[ 𝑈,𝑢 ] [ 𝑈,𝑑 ] [ 𝐷,𝑢 ] [ 𝑈,𝑑 ] BI [ (𝐷,𝑑),𝑟 ] NE [ (D,d ,r ] [ (D,u) ,r ] NE 和 BI 无法对应,机械地寻找博弈中的 NE,会发现采取的行动很不明智。 另一个例子: Ent 公司可以选择是否进入 Inc 公司的行业,Inc 可以选择是否发动对 Ent 的反击。 NE (i𝑛,N𝐹) (ou𝑡,𝐹) BI (i𝑛,N𝐹) 不应该相信那个生成会反击的人真的就会反击。 (ou𝑡,𝐹) 建立在一个不足信的威胁基础上。 回到顶部 第十六讲 逆向归纳:声誉和决斗 Backward induction: reputation and duels 继续第十五讲最后的例子 加入一些条件,一个公司,处于垄断地位,垄断了十个不同的市场,假如它们有顺序性,垄断者会对第一个尝试进入者发起攻击,从而威慑后面观望者,对于最后一个市场垄断者不会发起进攻,因为没有建立威慑的动机了。 因为不可能去阻止第十个尝试进入者,所以第九个尝试进入者就成了最后一个,逆向归纳所有人都该进入市场。 即使有 (1%)的概率垄断者是疯狂的,他就可以用疯狂的名义吓退进入者。 即使在十个市场都处于垄断地位,人们也会进入并与之竞争,连锁店博弈(the Chain Store Paradox)。 决斗(Duel) 扔海绵 Pi( d ) 参与人 i 在 d 点击中对手的概率 此处用 chome 应用 生成了一个类似图像,用的是 1/2 和 1/3 的指数函数,图中的两个方程除了用于生成图像,无其他用途。 预先抢投是一个关键点 优势定论 Dominance Argument 和 BI A 假设还没有人投出,如果 i 选手知道假设在 d 点 j 选手不会投出,下一轮他就会更近一步,此时 i 选手不会投出。 B 如果 i 选手在 d 点知道 j 选手会在 − 1 点投出,那么他应该投出海绵。 当前轮次的命中率要大于对手在下一轮次的失误率,如此比较是因为当前赢得游戏的概率是击中对手的概率,在下一轮次前进一步赢得游戏的概率取决于下一轮次对手失误的概率。(获胜率之间的比较) 前提是满足 如果 i 选手在 d 点的命中率 ≥ j 选手下一轮在 − 1 点的失误率,则应当投出。 此处得出结论是使用“优势定论 ”的结果,或者按照前几讲说法,剔除劣势策略,占优可解 d∗ 处的矛盾, i 选手无法确定 j 选手是否会投掷,因此无法确定自己的策略。 逆向归纳的推导 d = 0 时 Pi( d ) = 1 → 第一次投掷应该发生在 d∗ 处 有时等待是个好策略。Sometimes waiting is a good strategy. 
不要过度自信,也不要迷信先下手为强。 回到顶部 第十七讲 逆向归纳:最后通牒和讨价还价 Backward induction: ultimatums and bargaining 参与人 1,参与人 2 参与人1 向参与人2 给出一个分享1 美元的条件,参与人1 获得S,参与人2 获得1 − S,记作 S,1 − S 。 参与人 2 有两个选择,接受则按 S,S − 1 分配,拒绝 0,0 即使在非常简单的游戏中,使用逆向归纳的时候也必须小心。在现实世界当中,人们除了明显的收益还会关心其他东西。 两期议价博弈 (two period bargaining) 一阶段:1 美元,参与人 1 向参与人 2 给出条件, 1,1 − 1 。 参与人 2 有两个选择,接受则按 1,1 − 1 分配,拒绝则进入二阶段。二阶段:参与人 2 向参与人 1 给出条件 2,1 − 2 参与人 1 有两个选择,接受则按 2,1 − 2 分配,拒绝 0,0 每轮会有一个折损(discounting)—— < 1; 可以假设 = 0.9 带入理解一下 此处讲的一个折现问题,考虑了资金的时间价值,经济上常用于投资方案比选,将不同时期的资金流入与流出折现到一点来进行分析,这也就是财务净现值。 例如:今年的 100 元,在银行存款年利率为 10%的情况下,选择存款,明年将获得 100 元的本金,10 元的利息。也就是说考虑资金的时间价值,明年的 110 元也就相当于今年的 100 元——100 元就是明年 110 元的现值。 网友 Gabriel 在此处的解释相当明白了 给予者、接受者:这里的 offerer 指首先出价的人(这里是 player1),receiver 则指接受价格的人(这里是 player2)。 1-3 期:指 n 期博弈的结果 1 期的 0:若 player 2 拒绝了 player 1 的出价则 ta 将一无所有,因此即使 player 1 提 出(1,0)的分配方案 ta 也会接受 2 期的 δ:如果 2 拒绝了 player 1 的出价,那么到第二轮 ta 将会提出(0,1)的方案并且 player 1 会接受,因此 player 1 会将 1 贴现到今天的值 δ 留给 2,并且双方都接受((1 − δ , ))的出价。下面 n 期的推理都用相同的逆向归纳法进行 分析这类问题时总是假定:在 player 1 提出的价格与 player 2 在下一期得到的价值贴现到本期的值两者相等时,player 2 会接受 player 1 的出价。 两期博弈中参与人 1 向参与人 2 给出的条件(淡绿色点),参与人 2 获得 美元,参与人 1 获得1 − 美元。 如果参与人 1 给与参与人 2 > 1 × 参与人 2 会接受 如果参与人 1 给与参与人 2 < 1 × 参与人 2 会拒绝 如果参与人 2 知道明天可以得到 1 美元,那么参与人 1 今天至少要分给参与人 2 美元。 我个人理解在此处已经和最初的分钱案例不同了,但 Ben 似乎没有做一个明确的转换,此处参与人 1 给出一个参与人 2 一定接受的价格。 其实是一样的,单纯的扩展到无限期讨价还价而已,不是不同的案例 类似上面我举的存款例子,全部的待分金钱是明天的 110 元,参与人 2 明天得到 110 他是可以满意的,因为他占有了全部,那么在利率为 10%的情况下,今天的拿到 100 元实际也就等于明天到自己选择的时候占有了全部。 三期博弈 逐步走向一个议价模型 (alternate offer bargaining) 10 期博弈 收益 如果参与人 2 在第 1 轮拒绝了提议,参与人 2 在第 2 轮给出他的条件,那么就要在第 3 轮博弈中给出条件,我们证实了在第 2 轮博弈中,即如果参与人 2 在第 1 轮博弈中拒绝了条件,他会在第 2 轮中给出条件,那么在第 2 轮中他能够得到1 − 𝛿,所以你需要在第 1 轮给他 (1 −𝛿) 总结: (1)轮流提议的议价过程,在特殊条件下,会得到平均分配,这需要满足三个条件。 ① 可能会出现无穷次议价 ② 𝛿→ 1 可视为无折损 ③ 有相同的折损原因 𝛿1 = 𝛿2 (分析在折损率不同的情况下的结果) (2)快速给出的提议被接受,没有议价环节 回到顶部 第十八讲 不完全信息:信息集和子博弈完美 Imperfect information: information sets and sub-game perfection 信息集合(information set) 《策略与博弈》中以一个椭圆型来表示信息集合,这和数学上所用的表示法是一致的,且更易于理解,但为了作图的方便并和课程保持一致后续仍然会使用虚线。 参与人 2 不能分辨处于信息集合中的两个节点,参与人 2 可以区别参与人 1 是选了上中,还是选了下,但无法区别上或中。 定义:信息集合 参与人 i 的信息集合是一系列参与人 i 无法识别的参与人 i 的节点。 对信息集合的规定: ·参与人 2 可以通过观察选择的数量来判断他所处的节点 ·参与人 1 可以通过第一选择判断他所处的节点 定义:完全信息博弈(perfect information) 这里 Ben 跟很多书一样只讨论完全且完美信息的博弈,也就是说每个 player 对博弈的历史阶段都有完美记忆(perfect recall) 树上所有的信息集合都只包含一个节点的博弈。 不完全信息博弈(imperfect information) 定义:纯策略(pure strategies) 参与人 i 的纯策略是一个完全的行动计划,它告诉参与人 i 在他的每一个信息集合一定要如何行动。 由上面的树形图可以转化为如下矩阵 由上面的矩阵可以转化为如下的树形图 博弈的关键是信息,而不是时序。 参与人 1 的策略:𝑈𝑢,𝑈𝑑,𝐷𝑢,𝐷𝑑 参与人 2 的策略:𝑙,r NE ( 𝑈𝑢,𝑙 ) ( 𝐷𝑢,𝑙 ) ( 𝐷𝑑,𝑟 ) BI ( 𝐷𝑑,𝑟 ) 三人博弈,阐述纳什均衡的问题 NE ( 𝐴,𝑈,𝑙 ) 但这个均衡并不可信 只考虑参与人 2 和参与人 3 的博弈,子博弈——淡绿色部分 在整个博弈中 ( 𝐴,𝑈,𝑙 ) 是一个 NE ,但这个均衡标明在进入子博弈时无法达到 均衡,因此这个均衡是不可信的。 子博弈(sub-game): 子博弈是博弈的一部分,它满足以下三个条件。 ① 子博弈必须从单个节点开始 ② 它包含该节点的所有后代节点 ③它不能破坏任何信息集合 再次使用一下《策略与博弈》中的绘图方式,这种绘图方式对信息集合的表达让人更明白子博弈满足条件的第三条。 淡绿色区域不能成为子博弈是因为它破坏了信息集合——那个白色的椭圆。淡红色区域不能成为子博弈是因为它不是从单个节点开始。 如果 (S1∗, S2∗,… … Sm∗ ) 它们能在任意一个子博弈中达到 NE,那它就是一个子博 弈完美均衡(SPE,sub-game perfect equilibrium)。 子博弈精炼 NE 的一个重要特点是它可以排除不可信的威胁要成为 SPE,本身必须是一个 NE。 SPE (B,D ,r) 回到顶部 第十九讲 子博弈精炼均衡:招商引资和战略投资 Subgame perfect equilibrium: matchmaking and strategic investments 别搞砸了(don't screw up) NE ( 𝑈𝑢,𝑙 ) ( 𝐷𝑢,𝑟 ) ( 𝐷𝑑,𝑟 ) BI ( 𝑈𝑢,𝑙 ) 子博弈精炼(sub-game perfect) 图中淡绿色的子博弈 NE 源自策略的定义,它告诉每个参与人在不同信息集合下应该如何行动,即是有些博弈中信息集合无法获得,策略仍然为参与人在当前状况下提供指示。 根据整体矩阵得出的纳什均衡指示 NE ( 𝑈𝑢, 𝑙 ) ( 𝐷𝑢, 𝑟 ) ( 𝐷𝑑, 𝑟 ) 用子博弈的纳什均衡去符合整体博弈的纳什均衡,排除不符合的部分。子博弈精炼均衡要求,每个子博弈必须满足 NE 图中淡红色的子博弈 NE (u, l)(d,r) 根据整体矩阵得出的纳什均衡指示 NE (Uu,l)(Du ,r)(Dd ,r) ②在子博弈中非 NE 排除 两次排除后剩下的唯一一个子博弈精炼均衡 SPE(Uu,l) ,符合 BI。 介绍人博弈 (Matchmaker game) 大卫倾向加迪斯的《冷战》Dave Gaddis Cold War 妮娜倾向斯宾塞的《中国》Nina Spence China 纯策略的 
NE (G,G)(S ,S) 两者都为参与人 1 带来 1 个单位的收益 整体博弈 SPE= (sen ,G ,G )(send,S ,S) 站在参与人 1 的角度 1 的收益相对于 0 是优势策略 性别大战中混合策略的 NE NE [(2/3 ,1/3 ),( 1/3 ,2/3)] 参与人 1 撮合参与人 2 和参与人 3,他们碰面的概率 2/3 × 1/3 + 1/3 × 2/3 = 4/9 参与人 1 撮合时均衡中的预期收益是 4/9 × 1 + 5/9 × (−1) = − 1/9 站在参与人 1 的角度 0 的收益相对于− 1/9 是优势策略,他应该选择不撮合 SPE= (no sen , mi , mi) 博弈共有 3 个 SPE,每个都对应子博弈中的一个 NE 投资案例 原方案是年产 1 百万吨,使用新设备节约0.5$/吨,1 百万吨就是节约 50 万 购置设备 70 万,70 万>50 万,因此不该租用设备。 (2)经济学的答案 Economic 假设自己垄断,那么产量应遵照边际收益等于边际成本——此讨论见第六讲,见下图 红色矩形部分即为会计师的答案,他们忽略了因为成本的降低,厂家会调整自己的产量,即绿色三角形的获利。 三角形面积= 3/16 ≈ 0.19 0.5 + 0.19 = 0.69 < 0.7 使用经济学的算法租用设备的盈利仍然小于设备租用的投资。不应当租用该设备。 (3)博弈论的答案 Game Theory 经济学答案的局限在于仅仅考虑了自身产量的变化。 战略替代 (Strategic Substitutes) 因为 A 公司更新了设备,降低了成本,所以它的最优产量将会产生变化,形成一条新的最优反应虚线,即红色虚线。考虑到 A 公司的产量,B 公司会根据最优反应曲线来调整自己的产量,最终达成新的均衡,即由淡绿色点转变到红色点。 最终结果投资可以带来 31 万的利润,自行验证。 0.69 + 0.31 = 1 > 0.7 应当租用设备。 ①先分析子博弈,找到子博弈的纳什均衡,从子博弈的价值出发回头做决定。 首先解出对称古诺竞争数据,解出新的均衡,回过头来和那要投资的 70 万作比较。 ② 经济学比会计学多考虑了战略效应(strategic effect),但却忽视了其他参与人也会改变行为。 这里必须插入一个问题:关于博弈结果、博弈的均衡与博弈的均衡解博弈的结果等同于博弈的均衡解,但博弈的均衡跟均衡解不同,这里借用一个图说 明,在下图的两阶段博弈中,博弈的均衡解是(R,L'),但博弈的均衡却是(R,(R',L'))因为 NE 均衡是定义在 players 的策略之上,因此博弈的均衡策略要包含完整的计划这一点很重要,Ben 一直讲均衡解,但好像没怎么强调这个不同。 回到顶部 第二十讲 子博弈精炼均衡:消耗战 Subgame perfect equilibrium: wars of attrition 决斗博弈 两个参与人,每个阶段每个参与人可以选择攻击(Fight)或者退出(Quit),同时给出选择,直到一方退出后立即结束。 如果对手退出,我方得到奖励 = 1 如 果 双 方 都 选 择 攻 击,那 么 每 人 付 出 代 价−=−0.75 如 果 双 方 都 选 择 攻 击,那 么 每 人 付 出 代 价−=−0.75 如果双方都选择退出,那么每人获得 0 消耗战(war of attrition)行贿竞赛(bribery contests)全薪拍卖(all pay auction) 第二轮 B 的选择分支上为 f(2),下为 q(2),空间太小,省略了。 延续收益的两个均衡 两个纯策略博弈的完美均衡 求得均匀,却没有完成证明理智的参与者会选择在第一轮攻击如何去寻找一个折损较多的均衡?混合策略下的均衡 延续收益都为 0,即为第二阶段混合策略下的 NE 与子博弈的矩阵完全相同 混合策略下 SPE [(p∗, p∗)( p∗, p∗)] 收益期望为 0 将这个分析方式推广到无限博弈,分析结果也是一致的,在混合策略的 NE 下,延续收益仍然为 0 在消耗战为背景的博弈中,在理性参与人中有个一个均衡,更进一步说是一个合理的常识,即每个人都很理性,也知道其他人也是理性的,但却存在这样一个平衡,使人们不仅选择攻击而且一直攻击下去,在每个阶段他们有可能选择攻击。 随时间推移消耗战持续的可能性下降 回到顶部 第二十一讲 重复博弈:合作与最后一局游戏 Repeated games: cooperation vs. 
the end game 重复互动(Repeated Interaction) 在一个正在进行的关系中,对于将来奖励的承诺和未来惩罚的威胁,可能会为现在的好行为提供激励。 最后一轮都会背叛,因为没有一个将来的奖励,那么通过逆向归纳在这之前的一轮也会背叛,以此类推自始至终都会背叛。 前面类似例子垄断者威慑试图进入市场者的推演,见第十六讲。 要有一个明确的未来。 重复互动博弈的重点在于明确的未来会为现在的行动提供激励。 连任失败效应(lame duck effect) 两次博弈,收益矩阵如下 在一次性博弈中 (A,A)不是纯策略 NE (B,B)(C,C)是纯策略 NE 在两次博弈中在第二阶段无法持续(A,A) 希望人们在第一阶段达成合作(A,A),考虑如下策略如果选了(A,A)就先选 A 再选 C,如果不是则选 B 从子博弈与第二阶段的联系开始 在(A,A)之后的第二阶段,有一个特别的子集,这个策略会促使(C,C)的发生。第一个阶段的其他选项之后会,会引发(B,B ) A → (A , A) = 4 + ( C, C) = 3 = 7 B → ( B,A ) = 5 + ( B, B) = 1 = 6 第一阶段背叛 ≤ 小于得到奖励的收益减去惩罚的收益背叛的收益在当前,奖励和惩罚在下一阶段,即 (B ,A ) − (A ,A ) ≤ ( B,B ) − (C ,C ) → 5 − 4 ≤ 3 − 1 结论:如果一个重复的阶段博弈,有不止一个 NE,可以通过预测不同策略造成的结果来未下一次行动提供激励,激励可视为奖励或者惩罚。 存在的问题,在第二阶段仍然有动机促使达成收益更高的均衡。 破产(Bankruptcy)担保(Bail Out) 事前和事后的权衡效率讨论 (discussion of trading off ex-ante efficiency and ex-post efficiency) 抛硬币决定何时结束博弈,双正面结束——75%的机会继续。 选 C 合作,如果之前没有选 D 就一直选 C,如果有人选 D 了,就一直选 D。 恐怖和扳机策略(Grim Trigger Strategy) 比较官方的叫法是触发战略,也有译法叫做冷酷战略的 与前面课程博弈的显著不同——无法确定博弈何时结束,没有明确的最后阶段,那么参与人便无法确定在什么时候背叛来赢得最后阶段的更高收益。 检查这种持续合作是否是一个均衡 今天背叛的收益与保持合作的收益差异 ≤ 下一轮保持合作收益与保持背叛收益差额与博弈继续下去概率的乘积 回到顶部 第二十二讲 重复博弈:作弊,惩罚和外包 Repeated games: cheating, punishment, and outsourcing 听这讲时用的圣城的字幕,其他都是 YYeTs 人人影视的字幕,鉴于对前者风格不太熟悉的原因,本讲笔记可能更繁琐一些,为了避免漏掉有用的成分,有重复的部分各位见谅。 权衡良好行为带来的前景,和不良行为招致的损失,从而抑制我们作弊的念头。现在作弊的利益 ≤ 今后合作的利益(承诺 promise) − 今后欺骗的代价(threat) 需要承诺和威胁都真实可信 今天的威胁不可信,因为明天仍然会遵循 NE,那么今天的合作就没了基础保持威胁真实可信的方法是关注 SPE——特点每一个子博弈中都有 NE,利用这个特点来寻找合作机会 这个问题具有重复性,称为 ,每个时期,的可能性在延续,如果可能性是1 − 那么可能博弈每个时期都会结束。 现在作弊的利益 ≤ 今后合作的利益 − 今后欺骗的代价 3 − 2 ≤ 永远合作(C ,C) = 2 −永远背叛(D ,D) = 0 永远合作的收益: 解得: 假设博弈可以进行下去的概率为 p,贴现因子为 1/(1+r),下期可以得到的收益是π,那么如果可以进行到下期那么本期的收益为 pπ/(1+r),再下一期同样分析,那么如果设 P/(1+r)=delta 作为新的贴现因子,那么这个因子就既包括了时间价值又包括了博弈能够继续进行的可能性了,不是不恰当的。 求证恐怖和扳机策略能实现: ⇔⇔ 当满足这个条件时不会选择背叛 验证是否存在有利的策略变更: 先选 D,之后在下一时段选 C,之后永远选 D,结果会如何? 重复道德风险 (Repeated Moral Hazard) 在 Freedonia 不投资,那么收益为 0,代理人只获得基本工资 1(从事其他工作);如果投资,并设定工资为 W,此时代理人,可以选择诚实(Honest)或背叛(Cheat)。如果参与人选择背叛,那么投资损失原材料,代理人获得卖出原材料的收益 1,以及从事其他工作的获得的基本工资 1。 如果代理人选择诚实,那么我方的利润是 4,减去支付给代理人的基本工资 1,投资人的收益为3 − w,代理人的收益为w 假设这是一次性投资,为了生产顺利完成,我要付给代理人多少工资? 运用 BI,若w = 1,那么代理人会选择背叛需要做的是让工资足够高,使得代理人诚实并继续项目,并被判获得的多,需要w ≥ 2 w∗ = 2 如果你担心雇员会有背叛的动机,为了让他们工作,你需要支付巨大的工资溢价, Freedonia 的基本工资是 1,但你需要设定工资等于 2,一个 100%的工资溢价,以让其工作. 
重复互动,持续下去的概率为𝛿 在此情况下要付的工资 w∗∗ 今天背叛的诱惑 ≤ [ 继续关系值(继续雇佣) − 终止关系值(解雇) ] The value of continuing the relationship minus the value of ending the relationship 即是关系继续下去的概率相对较小,也会大幅度减少工资溢价 为了在这些持续关系中获得良好行为,必须要在明天提供一定的报酬如果你放到明天的砝码,或者说,如果明天继续下去的概率比较低,那么这个报酬就要比较高 回到顶部 第二十三讲 非对称信息:沉默,信号和教育之苦 Asymmetric information: silence, signaling and suffering education 第一部分 信息能够被证实的情况 古诺模型,两家企业 A 和 B,假设 B 的边际成本 位于高低之间企业 A 的成本有三种情况: 企业 B 只知道自己的成本,而企业 A 知道双方的成本,企业 A 可以选择是否告诉企业 B 自己的成,令企业 B 相信企业 A 的成本无需额外的花销。 关于策略替代,租用设备降低成本,见第十九讲后半节的讨论,会计学、经济学、博弈论 应该是一开始讲古诺模型的时候 既然三种情况的两种的需要曝光,那么剩下也没什么好隐瞒的了。 信息披露的过程 (Informational Unraveling) 重要结论:缺乏信息传达途径,或者说企业不像公布一些信息,这些现象本身也在传达着信息。 传递信号有成本的模型 (Costly Signaling) 优秀雇员 good workers——G——绩效 50——10% 差劲雇员 bad workers——B——绩效 30——90% 企业支付给优秀员工薪水——50;差劲雇员——30对于无法评价的一般员工支付 32(B G 加权平均) 马克·斯彭斯 Mike Spence 成本差异化,假设获得 MBA 学位每一年的成本对于优秀的雇员来说是 5,而对于差劲的雇员来说是 10。学费等价,且假设不存在机会成本,成本的差异体现在付出的精力。 有 MBA 就是好雇员,否则就是差雇员 证明均衡: ① 证明每一类雇员都不愿意改变 ② 证明雇主的想法和均衡行为是一致的 假设每个雇员都只工作一年 G-worker → MBA → 雇主认为该员工是好雇员 → 收益 绩效工资 50−扣除成本 3 × 5(三年,每年 5)总收益为 35 ·作出改变 → 没有 MBA → 雇主认为该员工是差雇员 → 收益 绩效工资30 < 35 B-worker → 没有 MBA → 雇主认为该员工是差雇员 → 收益 绩效工资 30 ·作出改变 → MBA → 雇主认为该员工是好雇员 → 收益 绩效工资50 − 扣除成本 3 × 10(三年,每年 10)总收益为 20< 30 此案例为分离均衡(separating equilibrium) 貌似 Ben 没有时间讲混同均衡 假设取得 MBA 只需要 1 年,带入上面的分析 B-worker 取得 MBA 收益变为 40 优于没 有 MBA 的 30 判断优秀员工与差劲员工需要取得 MBA 时间至少为 2 年 在成本上有足够的差别,是优秀的员工去念 MBA,而差劲的员工不想这么做。 结论: 一个好的信号不一定与很高的成本有关,但是要能通过成本区别不同的类型。 此处模型的缺陷: (1)模型中没有学习的概念(2)教育失去了社会用途,仅仅成为了区别优秀与差劲的工具(3)教育加剧了不平等 回到顶部 第二十四讲 非对称信息:拍卖和获奖者的诅咒 Asymmetric information: auctions and the winner's curse 拍卖(Auction) 公共价值(common values) 私人价值(private values) 被出售商品的价值用[ V ]标记公共价值 私人价值,物品的最终价值对每个人都不同,它完全具有特异性,并且我对它赋予的价值和你是没有关系的 [ ] 油井 住房 蛋糕 拍卖——罐子中硬币 最后胜出的出价要比实际价值高许多 赢家的诅咒(winner's curse) 拍卖中的收益情况 V——罐子中的硬币数-参与人的竞价(最高的出价) 0 人们的估价值 可以它当作真实价值,加偏差值 正态分布图像 获胜者是出价最大值 的参与人 i , 意味着偏差值最大 一般来说最后获胜的出价会比真实价值高很多 首次公开募股(Initial Public Offerings,简称 IPO) 油井的例子 每个公司都在油田里挖一个测试井,从测试井中每个公司都得到一个估值 假设参与人 i 的估值等于yi = 150,当被告知 𝑦j < 𝑦i,对所有 j 都成立 当参与人赢得拍卖时他会发现这个问题,而这会引起参与人的后悔 如果参与人 i 只考虑油井里有多少油,且赢得了拍卖,因此参与人做出的估值 至 少要和其他所有人的估值yj 一样大,即yi ≥ yj 所以出价时的相关价值就是,基于参与人 i 一开始的估价以及这个估价值yi 要比yj 大时, 应该出假设参与人 i 自己是最后的赢家,参与人 i 估计出来的罐子中的硬币数,应该像赢家那样去出价 拍卖形式 A首价密封拍卖机制 First-price Sealed-bid auction A=D B第二价格密封拍卖 Second-price Sealed-bid auction 赢家支付第二高的出价维克瑞拍卖 (Vickrey auction) C公开增价拍卖 (Ascending open auction) D 公开降价拍卖 (Descending open auction) 逐步降价直到有人提出购买荷兰式拍卖 (Dutch auction)D=A B 和 C 不同但密切相关 B≈C,区别不在价格上而是在信号上 私人价值的拍卖 参与人出价 𝐵𝑖 收益: 回到顶部 参考资料 视频: 合群是堕落的开始 优秀的开始是孤行 posted @ 2020-02-16 10:36麦奇 阅读(3304) 评论(1)收藏举报 刷新页面返回顶部 登录后才能查看或发表评论,立即 登录 或者 逛逛 博客园首页 【推荐】100%开源!大型工业跨平台软件C++源码提供,建模,组态! 【推荐】HarmonyOS 专区 —— 闯入鸿蒙:浪漫、理想与「草台班子」 【推荐】2025 HarmonyOS 鸿蒙创新赛正式启动,百万大奖等你挑战 【推荐】天翼云爆款云主机2核2G限时秒杀,28.8元/年起!立即抢购 公告 Live2d Test Env 昵称: 麦奇 园龄: 7年4个月 粉丝: 26 关注: 47 +加关注 <2025年9月> 日 一 二 三 四 五 六 31 1 2 3 4 5 6 7 8 9 10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 1 2 3 4 5 6 7 8 9 10 11 搜索 常用链接 我的随笔 我的评论 我的参与 最新评论 我的标签 最新随笔 1.linux/arm64架构下kubernetes集群的搭建 2.手把手教你搭建实时日志分析平台 3. 
Travis-CI+Hexo+Serverless+语雀搭建博客 4.博客迁移 5.BeanFactory和FactoryBean 6.[转]30张图带你彻底理解红黑树 7.Flutter 快速入门 8.[转]Kubernetes 搭建大型集群 9.[转]Kubernetes 1.8.x 全手动安装教程 10.[转]Kubernetes从零开始搭建自定义集群 积分与排名 积分 - 218106 排名 - 5519 随笔分类 (174) C/C++(1) Ethereum(1) Golang(6) Hyperledger(4) Java(36) JavaScript(6) JavaWeb(9) JDBC(5) JSP(2) JVM(1) Linux(12) Redis(1) Shell(2) Vue(2) XML(1) 编程思想(2) 工具(1) 其他资料(1) 区块链(4) 软件工程(3) 设计模式(5) 数据结构(1) 数据库(12) 算法(1) 微服务(1) 我的笔记(53) 消息队列(1) 更多 随笔档案 (249) 2022年11月(1) 2021年6月(1) 2021年5月(1) 2021年1月(1) 2020年10月(1) 2020年4月(1) 2020年3月(12) 2020年2月(13) 2020年1月(3) 2019年12月(6) 2019年10月(7) 2019年9月(7) 2019年8月(6) 2019年7月(2) 2019年5月(5) 2019年4月(8) 2019年3月(12) 2019年2月(10) 2019年1月(16) 2018年12月(5) 2018年11月(8) 2018年10月(18) 2018年9月(14) 2018年8月(40) 2018年7月(51) 更多 文章档案 (12) 2020年3月(1) 2020年2月(1) 2020年1月(2) 2019年9月(1) 2019年8月(3) 2019年7月(1) 2019年6月(1) 2019年5月(1) 2018年7月(1) 阅读排行榜 1. 关于java程序在运行时出现a java exception has occured时解决方法(65887) 2. SpringBoot中普通类无法通过@Autowired自动注入Service、dao等bean解决方法(29565) 3. 常用FTP命令汇总(16775) 4. java.sql.SQLException: No suitable driver found for jdbc:mysql://localhost:3306/(15850) 5. Mybatis plus 插入数据时将自动递增的主键手动进行赋值设置(10274) 6. Hyperledger Fabric 踩坑汇总(9544) 7. Node Sass does not yet support your current environment: Linux 64-bit with Unsupported runtime(8840) 8. 关于ubuntu挂载ntfs无法进行读写的解决方法(8520) 9. 关于阿里云ecs服务器无法用FTP进行连接问题(8182) 10. SQL的四种连接(内连接,外连接)(6380) 评论排行榜 1. Microsoft SQL Server on Linux破解 2G内存限制(5) 2. Hyperledger Caliper(2) 3. Hyperledger Fabric 踩坑汇总(2) 4. SpringBoot中普通类无法通过@Autowired自动注入Service、dao等bean解决方法(2) 5. 博弈论(1) 6. gRPC Learning Notes(1) 7. 实现简单ORM案例(1) 8. Docker Learning Notes(1) 9. java.io.IOException: java.io.FileNotFoundException: /tmp/tomcat.2457258178644046891.8080/work/Tomcat/localhost/innovate-admin/C:/up/154884318438733213952/sys-error.log (没有那个文件或目录)(1) 10. Node Sass does not yet support your current environment: Linux 64-bit with Unsupported runtime(1) 推荐排行榜 1. java动态代理中的invoke方法是如何被自动调用的(6) 2. SpringBoot中普通类无法通过@Autowired自动注入Service、dao等bean解决方法(3) 3. Java Juc学习笔记(1) 4. WebService-CXF 学习笔记(1) 5. Node Sass does not yet support your current environment: Linux 64-bit with Unsupported runtime(1) 最新评论 1. Re:博弈论 请问有习题集资源吗,现在的人人网官网进不去了 --Raindom_Butterfly 2. Re:Hyperledger Fabric 踩坑汇总 docker rm -f $(docker ps -a | grep "hyperledger/" | awk "{print $1}")为什么说这个格式不正确啊... --掉头发的木木 3. Re:Linux(Centos)安装图形化界面步骤 Warning: Module or Group 'X Window System' does not exist. Error: Nothing to do. --WaterBird 4. Re:Hyperledger Caliper 大佬好 我安装时老报错 EHOIST_PKG_VERSION "@hyperledger/caliper-ethereum" package depends on web3@1.3.0, which ... --sheng0302 5. Re:Microsoft SQL Server on Linux破解 2G内存限制 2G Memory都没有,还装什么sqlserver呀? --westsoft 6. Re:SpringBoot中普通类无法通过@Autowired自动注入Service、dao等bean解决方法 这... --君君的BigHeadDaddy 7. Re:Hyperledger Caliper 大佬能留个联系方式吗 --都可以额 8. Re:Node Sass does not yet support your current environment: Linux 64-bit with Unsupported runtime npm uninstall --save node-sass npm install --save node-sass 成功地解决了我vott安装错误的问题,谢谢博主大大~... --supreme大可爱 9. Re:java.io.IOException: java.io.FileNotFoundException: /tmp/tomcat.2457258178644046891.8080/work/Tomcat/localhost/innovate-admin/C:/up/154884318438733213952/sys-error.log (没有那个文件或目录) 具体原因看这个,还有解决方案 --caibixyy 10. Re:Microsoft SQL Server on Linux破解 2G内存限制 就算解除限制能安装,2G内存还是卡到天际的,所以没啥意义吧 --bigcan 博客园© 2004-2025 浙公网安备 33010602011771号浙ICP备2021040463号-3 点击右上角即可分享
189373
https://www.awesomemath.org/wp-pdf-files/math-reflections/mr-2021-06/mr_6_2021_tangential_quadrilaterals_cyclicity.pdf
Tangential Quadrilaterals and Cyclicity Dr. Suzy Manuela Prajea J.L.Chambers High School, Charlotte, NC, USA Abstract In the paper , N. Minculete proved some beautiful properties of tangential quadrilaterals using trigonometric computations. This paper will ease the role of trigonometry by provid-ing new techniques based more on pure geometric considerations. These techniques will help further to deduce some characterizations for tangential cyclic quadrilaterals. From a didac-tic perspective, the content becomes in this way accessible to a larger variety of high school students who intend to improve their mathematical education in the field of geometry to re-veal fascinating characterizations of some outstanding geometric configurations at a reasonable interference with basic algebra and trigonometry. 1 Notations For the simplicity of the writing will denote by ∠ABC both the angle and the measure of the angle ∠ABC. The difference will be deducted from the context of the usage of this notation.The distance from a point P to a line AB will be denoted by dAB and the area of the triangle (ABC) will be denoted shortly by (ABC). To easiness the reading, the wording will prevail over the abstract formal mathematical notations when possible. 2 Tangential Quadrilaterals Characterizations Definition 2.1. A convex quadrilateral is called tangential if there is a circle tangent to the sides of the quadrilateral (incircle). 1 Tangential Quadrilaterals and Ciclicity Theorem 1 (Newton). If ABCD is a tangential quadrilateral and X, Y, Z, T are the tangency points of the incircle with the sides AB, BC, CD, DA then the lines AC, BD, XZ, YT are concurrent. Proof. Denote by a, b, c, d the lengths of the tangents from the vertices A, B, C, D to the incircle and by P, P ′ the intersection points of the diagonal AC with the lines XZ, Y T respectively. Notice that ∠PZC = ∠PXB (subtend the same arc) and ∠ZPC = ∠APX as vertical angles. Denote these angles with α and β respectively. Law of sines in △PCZ and △PXA provides PC sin α = c sin β and PA sin (1800 −α) = a sin β . Dividing these relationships it follows PA PC = a c . Analogously we get P ′A P ′C = a c . In conclusion we have PA PC = P ′A P ′C Finally, due to the fact that both points P and P ′ are on the segment AC and split it in the same ratio it follows that the points P and P ′ are identical. Therefore the lines AC, BD, XZ, and Y T are concurrent. Corollary 1.1. Let be ABCD a tangential quadrilateral, X, Y, Z, T the tangency points of the incircle with the sides AB, BC, CD, DA, and P the intersection point of the lines AC, BD, XY, ZT. The length of the tangents from A, B, C, D to the incircle are denoted by a, b, c, d. Then there are u, v positive real numbers such that PA = au, PC = cu, PB = bv, PD = dv Proof. It follows immediately from PA PC = a c and PB PD = b d. MATHEMATICAL REFLECTIONS 6 (2021) 2 Tangential Quadrilaterals and Ciclicity Theorem 2. Let be ABCD a convex quadrilateral. The following conditions are equivalent: (i) ABCD tangential (ii) AB + CD = AD + BC (iii) 1 dAB + 1 dCD = 1 dBC + 1 dAD (iv) a sin A sin B + c sin C sin D = b sin BsinC + d sin D sin A Proof. (i) ⇒(ii) As in the proof of Theorem 1 it follows AB = a + b, CD = c + d and also BC = b+c, AD = a+d. In consequence, AB+CD = (a+b)+(c+d), BC+AD = (b+c)+(a+d) and therefore AB + CD = BC + AD. Proof. (ii)⇒(i) Two cases will be considered for this proof. 1. The pairs of opposite sides of ABCD are parallel. It results ABCD parallelogram so the oppo-site sides are equal. 
From (ii) it follows that the sums of the opposite sides are also equal therefore all the sides are equal so ABCD is a rhombus. It is known that the center of the rhombus is equal distanced from its sides so there is a circle centered in the intersection of diagonals that is tangent to the sides of the rhombus (the radius is the common distance of the center to the rhombus sides) so ABCD is tangential. 2.There is a pair of opposite sides of ABCD that are not parallel, for instance AB and CD are not parallel. Denote by S their intersection point. MATHEMATICAL REFLECTIONS 6 (2021) 3 Tangential Quadrilaterals and Ciclicity WLOG, assume the point A between S and B. Then D is between S ans C (ABCD is convex). Assume by contradiction that ABCD is not tangential. Define Q as the intersection points of angle bisectors from A and D so the point Q will be equal distanced from the sides AB, AD, CD and therefore the sides AB, AD, CD will be tangent to the circle centered in Q that has the radius the common distance from Q to the sides AB, AD, CD. There are two possible situations: BC to be external to the circle or BC to be secant to the circle. In the case that BC is external to the circle (the case BC secant can be proved in the same way), consider the parallel to the line BC that is tangent to the circle and denote by B′ and C′ the intersections with the sides AB and CD respectively. Due to the fact that BC is external to the circle, it follows that B′ is on the side AB and C′ on the side DC.The circle becomes incircle for the quadrilateral AB′C′D so due to the fact that the tangents from A, B′, C′, D are equal it results that AB′ + C′D = AD + B′C′ (the same argument as in (i)⇒(ii)). From (ii) it holds also AB + CD = AD + BC. Subtracting the last two equations it follows BB′ + CC′ = BC −B′C′ or equivalently BB′ + CC′ + B′C′ = BC. The last equation is a contradiction because BC < BB′ + CC′ + B′C′ (the length of the segment BC is less than the length of any polygonal line from B to C). In consequence, ABCD is tangential. Proof. (ii)⇒(iii) Denote ∠CPD = γ and area of △UV W by (UV W). Then (APB) = (a + b)dAB 2 and from Corollary 1.1 it follows (APB) = (au)(bv) sin γ 2 . The last two equations help to conclude 1 dAB = a + b abuv sin γ A similar equation stands for CD 1 dCD = c + d cduv sin γ Adding the last two equations, it results 1 dAB + 1 dCD = 1 uv sin γ a + b ab + c + d cd  = 1 uv sin γ 1 a + 1 b + 1 c + 1 d  Similarly, the following holds 1 dBC + 1 dAD = 1 uv sin γ 1 a + 1 b + 1 c + 1 d  MATHEMATICAL REFLECTIONS 6 (2021) 4 Tangential Quadrilaterals and Ciclicity and hence it results the conclusion 1 dAB + 1 dCD = 1 dBC + 1 dAD Proof. (iii)⇒(iv) Lemma 2.1. Consider △ABC and denote BC = a, CA = b, AB = c.Then dBC = c sin B. Proof. This can be easily justified (regarding the ∠B is acute, right or obtuse) by applying the sin B ratio in △ABD, where D is the foot of the perpendicular from A to BC. If A′, B′, C′, D′ are the feet of the perpendiculars from P to A′, B′, C′, D′ then (iii) becomes 1 PA′ + 1 PC′ = 1 PB′ + 1 PD′ or equivalently 1 + PA′ PC′ = PA′ PB′ + PA′ PD′ (1) Denote a, b, c, d the lengths of the sides AB, BC, CD, DA, by A′, B′, C′, D′ the feet of the per-pendiculars from P to the sides AB, BC, CD, DA and by A′′, B′′ the feet of the perpendicu-lars from D to the sides AB, BC. According to Lemma 2.1, DA′′ = d sin A, DB′′ = c sin C. 
From △BPA′ ∼△BDA′′ and △BPB′ ∼△BDB′′ it follows PA′ DA′′ = BP BD = PB′ DB′′ and then PA′ PB′ = DA′′ DB′′ or PA′ PB′ = d sin A c sin C (2) MATHEMATICAL REFLECTIONS 6 (2021) 5 Tangential Quadrilaterals and Ciclicity Similarly, PA′ PD′ = b sin B c sin D (3) Analogously holds PD′ PC′ = a sin A b sin C . Multiplying the last two equations it follows PA′ PC′ = a sin A sin B c sin C sin D (4) Finally, replacing the ratios obtained in (2), (3) and (4) in the equation (1) it results 1 + a sin A sin B c sin C sin D = d sin A c sin C + b sin B c sin D Multiplying on both sides with c sin C sin D it follows immediately that c sin C sin D + a sin A sin B = d sin D sin A + b sin B sin C so (iv) is proved. Proof. (iv)⇒(i) Two cases will be considered for this proof. 1. AB ∥CD. Assume by contradiction that ABCD is not tangential (for proof will use a similar idea as in (ii)⇒(i)). Define Q as the intersection points of angle bisectors from A and D so the point Q will be equal distanced from the sides AB, AD, CD and therefore the sides AB, AD, CD will be tangent to the circle centered in Q that has the radius the common distance from Q to the sides AB, AD, CD. There are two possible situations: BC to be external to the circle or BC to be secant to the circle. In the case that BC is external to the circle (the case BC secant can be proved in the same way), consider the parallel to the line BC that is tangent to the circle and denote by B′ and C′ the intersections with the sides AB and CD respectively. Due to the fact that BC is external to the circle, it follows that B′ is on the side AB and C′ on the side DC (5).The circle becomes incircle for the quadrilateral AB′C′D so due to the equivalence (i) ⇐ ⇒(ii) and the implications (ii) ⇒(iii)⇒(iv) proved above, the following equation holds for AB′C′D a′ sin A sin B′ + c′ sin C′ sin D = b′ sin B′ sin C′ + d sin D sin A (6) where a′, b′, c′, d′ are the sides’ lenghts of the quadrilateral AB′C′D. For the quadrilateral ABCD, according to (iv) it holds also the equation a sin A sin B + c sin C sin D = b sin B sin C + d sin D sin A (7) MATHEMATICAL REFLECTIONS 6 (2021) 6 Tangential Quadrilaterals and Ciclicity From B′C′ ∥BC it follows ∠B′ = ∠B, ∠C′ = ∠C. Subtracting the equations (6) and (7) it results (a −a′) sin A sin B + (c −c′) sin C sin D = (b −b′) sin D sin A (8) From (5) it follows that a > a′, c > c′ and from AB ∥CD and B′C′ ∥BC it results B′BCC′ parallelogram so b = b′. Therefore the left hand side in (8) is positive while the right hand side is negative. This is a contradiction so ABCD is tangential. 2. AB ∦CD. Denote S the intersection point of the lines AB and CD. WLOG, assume the point A lies between S and B. Then D is between S ans C (ABCD is convex). Assume by contradiction that ABCD is not tangential (approximately the same idea as in(ii)⇒(i), strategically adapted). Define Q as the intersection point of angle bisectors from B and C so the point Q is equal distanced from the sides AB, BC, CD. Therefore the sides AB, BC, CD are tan-gent to the circle centered in Q that has the radius the common distance from Q to the sides AB, BC, CD. There are two possible situations: AD external to the circle or AD secant to the circle. In the case that AD is external to the circle (the case AD secant can be proved similarly), consider the parallel to the line AD that is tangent to the circle and denote by A′ and D′ the intersections with the sides AB and CD respectively. 
Due to the fact that AD is external to the circle, it follows that A′ is on the side AB and D′ on the side CD.The circle becomes incircle for the quadrilateral A′BCD′ so the following equation holds (due to the equivalence (i) ⇐ ⇒(ii) and the implications (ii) ⇒(iii)⇒(iv) proved above) a′ sin A′ sin B + c′ sin C sin D′ = b sin B sin C + d′ sin D′ sin A (9) For ABCD stands also a similar equation (from (iv)) a sin A sin B + c sin C sin D = b sin B sin C + d sin D sin A (10) From A′D′ ∥AD it results ∠A′ = ∠A, ∠D′ = ∠D. Subtracting (10) and (9) it follows (a −a′) sin A sin B + (c −c′) sin C sin D = (d −d′) sin D sin A (11) Because a > a′, c > c′, d < d′ (see the figure and the explanations provided) it follows that the left hand side of the equation (11) is positive while the right hand side is negative and hence contradiction. In conclusion, ABCD is tangential. MATHEMATICAL REFLECTIONS 6 (2021) 7 Tangential Quadrilaterals and Ciclicity 3 Tangential Quadrilaterals Properties Proposition 3.1. If ABCD is a tangential quadrilateral and P is the intersection of the diagonals then AB · PC · PD + CD · PA · PB = BC · PD · PA + DA · PB · PC Proof. Denote AB = a, BC = b, CD = c, DA = b, PA = x, PB = y, PC = z, PD = t, ∠APB = α. Consider A′, B′, C′, D′ the feet of the perpendiculars from P to the lines AB, BC, CD, DA. From (PAB) = xy sin α 2 and (PAB) = aPA′ 2 it follows aPA′ = xy sin α or equivalently 1 PA′ = a xy sin α. Similarly, 1 PC′ = c tz sin α, 1 PB′ = b yz sin α, 1 PD′ = d xt sin α. Replacing in the equation from Theorem 2 (iii) it results a xy sin α + c tz sin α = b yz sin α + d xt sin α or equivalently atz + cxy = bxt + dyz Lemma 2.2 (Pascal). Let ABCDEF be a hexagon (possible self-intersecting and possible degener-ate) inscribed in a circle and P, Q, R the points of intersection of the opposite pairs of sides/diagonals in hexagon (AB, DE), (BC, EF), (CD, FA). Then P, Q, R are collinear. Proposition 3.2. Let ABCD be a tangential quadrilaterals and X, Y, Z, T the tangency points of the incircle with the sides AB, BC, CD, DA. Denote by E, F, U, V the intersection points of the pairs of lines (AB, CD), (AD, BC), (XT, Y Z), (XY, TZ). Then E, F, U, V are collinear. Proof. Consider the (degenerate) hexagon TXXY ZZ inscribed in the incircle and the opposite pairs of diagonals (TX, Y Z) intersecting at U,then (XX, ZZ) intersecting at E, and (XY, ZT) intersecting at V . MATHEMATICAL REFLECTIONS 6 (2021) 8 Tangential Quadrilaterals and Ciclicity From Pascal’s Theorem it follows U, E, V collinear. (12) Similarly, the (degenerate) hexagon TTXY Y Z inscribed in the incircle has the opposite pairs of diagonals (TT, Y Y ), (TX, Y Z), (XY, ZT) intersecting at F, U, V . From Pascal’s Theorem it follows F, U, V collinear. (13) Finally, from (12) and (13) it follows that the points E, F belongs to the line UV hence the points E, F, U, V are collinear. 4 Tangential Quadrilaterals and Cyclicity Definition 4.1. A quadrilateral is called cyclic if there is a circle that passes through its vertices. Proposition 4.1. Let ABCD be a tangential cyclic quadrilateral and P the intersection of the diagonals. Then the feet of the angle bisectors of ∠PAB, ∠PBC, ∠PCD, ∠PDA to the sides AB, BC, CD, DA are exactly the tangency points of the incircle. Proof. Consider the notations and results from Theorems 1 and 2: AB = AT = a, BX = BY = b, CY = CZ = c, DZ = DT = d and PA = au, PC = cu, PB = bv, PD = dv. From ABCD cyclic it follows ∠PAB = ∠PDC. Also ∠AXP and ∠DZP are half of the arc XTZ so ∠AXP = ∠DZP. 
In consequence, from △PAX and △PDZ (because the sum of the angles are 1800 in each triangle) it results ∠APX = ∠DPZ but ∠DPZ = ∠BPX as vertical angles so finally ∠APX = ∠BPX. Hence the ray (PX is the angle bisector of ∠APB. Analogously, the rays (PY, (PZ, (PT are the angle bisectors of ∠BPC, ∠CPD, ∠DPA. Theorem 3. Let ABCD be a tangential quadrilateral and X, Y, Z, T the tangency points of the incircle to the sides AB, BC, CD, DA. Then ABCD is cyclic iffAX · CZ = BY · DT. MATHEMATICAL REFLECTIONS 6 (2021) 9 Tangential Quadrilaterals and Ciclicity Proof. (⇒) Consider ABCD cyclic. It is required to prove AX · CZ = BY · DT. Denote PA = a, PB = b, PC = c, PD = d. From Proposition 4.1 it follows that (PX is the angle bisector of ∠PAB. The angle bisector theorem in △PAB implies AX BX = AP BP or AX BX = a b . Hence there exists a positive number x s.t. AX = ax, BX = bx. Similarly there are y, z, t positive numbers s.t. BY = by, CY = cy, CZ = cz, DZ = dz, AT = at, DT = dt. From the cyclicity of ABCD it follows ∠PAB = ∠PDC and from (PX, (PZ angle bisectors of the congruent angles ∠APB and ∠CPD it results the similarity of the triangles △PBX and △PCZ. Hence PB PC = BX CZ or b c = bx cz that heads to x = z. Analogously it results y = t. From ABCD tangential it follows AB + CD = BC + AD (Theorem 2) so it results (ax + bx) + (cz + dz) = (by + cy) + (dt + at) Also x = z and y = t heads to (a+b+c+d)x = (b+c+d+a)y and then x = y. In consequence x = y = z = t. Finally AX · CZ = (ax)(cx) = acx2 and BY · DT = (bx)(dx) = bdx2. The cyclicity of ABCD involves ac = bd (Ptolemy’s Theorem) and therefore it results immediately AX · CZ = BY · DT. (⇐) Consider AX · CZ = BY · DT. It is required to prove that ABCD is cyclic. Denote a, b, c, d the length of tangents from A, B, C, D to the incircle, X, Y, Z, T the tangents points of the incircle to the sides AB, BC, CD, DA and P the intersection of the diagonals. From Corollary 1.1 it results PA = au, PC = cu, PB = bv, PD = dv where u, v positive numbers. MATHEMATICAL REFLECTIONS 6 (2021) 10 Tangential Quadrilaterals and Ciclicity Denote ∠APX = α, ∠BPX = β, ∠BPY = γ, ∠CPY = δ. From AX · CZ = BY · DT it follows ac = bd or a b = d c . (14) In the following, areas formulae for (APX) and (BPX) will be manipulated strategically several times. Starting with (APX) = a · d(P, AB) 2 , (BPX) = b · d(P, AB) 2 it follows (APX) (BPX) = a b (15) then continue with (APX) = au(sin α)PX 2 , (BPX) = bv(sin β)PX 2 it follows au(sin α)PX bv(sin β)PX = a b (16) or equivalently to u sin α = v sin β (17) Denote ∠PXB = ∠PZC = ω it follows ∠PXA = ∠PZD = 1800 −ω. From the △PAX it results ∠PAX = ω −α and from the △PBX it results ∠PBX = 1800 −(α + β). Finally (APX) = a(au) sin (ω −α) 2 , (BPX) = b(bv) sin (ω + β) 2 provides a2u sin(ω −α) b2v sin(ω + β) = a b (18) or equivalently MATHEMATICAL REFLECTIONS 6 (2021) 11 Tangential Quadrilaterals and Ciclicity au sin(ω −α) = bv sin(ω + β) (19) Analogously, using the same areas-based techniques for △CPZ and △DPZ it follows dv sin(ω −β) = cu sin(ω + α) (20) Multiplying (19) and (20 and using (14) it results after simplifications u2 sin(ω + α) sin(ω −α) = v2 sin(ω + β) sin(ω −β) (21) Using on both sides of the equation the trig formula sin x sin y = 1 2(cos(x −y) −cos(x + y)) it results u2(cos 2α −cos 2ω) = v2(cos 2β −cos 2ω) Applying the double-angle formula cos 2x = 1 −2 sin2 x it follows u2(1 −2 sin2 α −cos 2ω) = v2(1 −2 sin2 β −cos 2ω) Using (17) and it follows u2(1 −cos 2ω) = v2(1 −cos 2ω) Due to the fact that 00 < ω < 1800 it follows that cos 2ω ̸= 1. 
The previous equation implies u2 = v2 and finally u = v (22) Back to the length of the segments PA, PB, PC, PD used at the beginning of the sufficiency proof (⇐), from u = v it results PA = au, PB = bu, PC = cu, PD = du and because ac = bd it results immediately PA · PC = PB · PD i.e. ABCD cyclic. Note for the sufficiency proof (⇐) : from (17) and (22) it follows also sin α = sin β and due to the fact that α, β are in the interval (00, 1800) and 0 < α + β < 1800 it results α = β. Analogously can be obtained γ = δ. In consequence, the rays (PX, (PY, (PZ, (PT are the angle bisectors of ∠APB, ∠BPC, ∠CPD, ∠DPA or equivalently XZ ⊥Y T. (23) Theorem 4. Let ABCD be a tangential quadrilateral and X, Y, Z, T the tangency points of the incircle to the sides AB, BC, CD, DA. Then ABCD is cyclic iffXZ ⊥Y T. Proof. It follows immediately from Theorem 3 and Note (23). Proposition 4.2. Let be ABCD a cyclic quadrilateral, P the intersection of the diagonals and X, Y, Z, T the feet of the angle bisectors of ∠PAB, ∠PBC, ∠PCD, ∠PDA. If XY ZT is cyclic then ABCD is tangential. MATHEMATICAL REFLECTIONS 6 (2021) 12 Tangential Quadrilaterals and Ciclicity Proof. Denote PA = a, PB = b, PC = c, PD = d. From the angle bisector theorem in △PAB it results AX BX = a b . Hence there is a positive number x s.t. AX = ax, BX = bx. Analogously, there are y, z, t positive numbers s.t. BY = by, CY = cy, CZ = cz, DZ = dz, DT = dt, AT = at. Denote ∠APX = ∠BPX = α, ∠BPY = ∠CPY = β. Notice that △PBX and △PCZ have two pairs of congruent angles so △PBX ∼△PCZ. It results PB PC = BX CZ or b c = bx cz so x = z. Similarly, y = t. (24) From XY ZT cyclic it follows PX · PZ = PY · PT and hence PX2 · PZ2 = PY 2 · PT 2 (25) From the angle bisector length theorem in △PAB it results PX2 = PA · PB(PA + PB + AB)(PA + PB −AB) AB2 or equivalently PX2 = ab(a + b + (a + b)x)(a + b −(a + b)x) (a + b)2 = ab(a + b)2(1 −x2) (a + b)2 = ab(1 −x2) (26) Analogously PZ2 = cd(1 −z2), PY 2 = bc(1 −y2), PT 2 = ad(1 −t2). Using (24) it results PZ2 = cd(1 −x2), PY 2 = bc(1 −y2), PT 2 = ad(1 −y2) (27) Using (26) and (27) in the equation (25) it follows abcd(1 −x2)2 = abcd(1 −y2)2 or equivalently |1 −x2| = |1 −y2| (28) newline Notice that the triangle inequality PA + PB > AB leads to a + b > (a + b)x or x < 1. Similarly y < 1 and hence (28) becomes 1 −x2 = 1 −y2 so x = y and from (24) it results x = y = z = t. Finally AB + CD = (a + b + c + d)x and AD + BC = (a + b + c + d)x so AB + CD = AD + BC. MATHEMATICAL REFLECTIONS 6 (2021) 13 Tangential Quadrilaterals and Ciclicity From Theorem 2 it results that the quadrilateral ABCD is tangential. More than this, it results also AX = AT, BX = BY, CY = CZ, DZ = DT so X, Y, Z, T are the contact points of the incircle with the quadrilateralABCD. Proposition 4.3. Let be ABCD a cyclic quadrilateral, P the intersection of the diagonals and X, Y, Z, T the feet of the angle bisectors of ∠PAB, ∠PBC, ∠PCD, ∠PDA. Then XY ZT is tan-gential if and only if ABCD is trapezoid. Proof. Denote PA = a, PB = b, PC = c, PD = d. As in the proof of the Proposition 4.2 it results AX = AT = ax, BX = BY = bz, CY = CZ = cx, DZ = DT = dx. Because the rays (PX, PY, PZ, PT are the angle bisectors of the ∠APB, ∠BPC, ∠CPD, ∠DPA it results immediately XZ ⊥TY . 
From Proposition 4.2 it follows that PX = p ab(1 −x2), PZ = p cd(1 −x2), PY = p bc(1 −x2), PT = p ad(1 −x2) The Pythagorean Theorem in △XPY delivers XY = p (1 −x2)b(a + c) Similarly TZ = p (1 −x2)d(a + c), TX = p (1 −x2)a(b + d), Y Z = p (1 −x2)c(b + d) Hence ABCD tangential ⇐ ⇒XY + TZ = XT + ZY ⇐ ⇒ p 1 −x2( p b(a + c) + p d(a + c)) = p 1 −x2( p a(b + d) + p c(b + d)) ⇐ ⇒(because 0 < x < 1 as in the proof of Proposition 4.2) p b(a + c) + p d(a + c) = p a(b + d) + p c(b + d) ⇐ ⇒ (a + c)(b + d + 2 √ bd) = (b + d)(a + c + 2√ac) ⇐ ⇒ (a + c) √ bd = (b + d)√ac MATHEMATICAL REFLECTIONS 6 (2021) 14 Tangential Quadrilaterals and Ciclicity From Theorem 3 it results ac = bd so the previous equation is equivalent with a + c = b + d Denoting a + c = b + d = S and ac = bd = P it follows that a, c and b, d are the solutions of the same quadratic equation x2 −Sx + P = 0 so there are two possible situations: (i) a = b, c = d (ii) a = d, b = c Notice that (i) ⇐ ⇒AB ∥CD and (ii) ⇐ ⇒AD ∥BC. Therefore, a + c = b + d ⇐ ⇒ABCD trapezoid. As a remark, note that the ABCD is an isosceles trapezoid due to the fact that it is cyclic. References Andreescu, T. and Enescu, B., Mathematical Olympiad treasures, Birkh¨ auser, Boston (2006). Minculete, N., Characterizations of a tangential quadrilateral, Forum Geom. 9 (2009) pp. 113–118. Josefsson, M., Similar metric characterizations of tangential and extangential quadrilaterals, Forum Geom. 12 (2012) Josefsson, M., Angle and circle characterizations of tangential quadrilaterals, Forum Geom. 14 (2014) Dr. Suzy Manuela Prajea, J.L. Chambers High School, Charlotte, North Carolina, USA prajeamanuela2012@gmail.com MATHEMATICAL REFLECTIONS 6 (2021) 15
189374
https://chem.libretexts.org/Bookshelves/Organic_Chemistry/Map%3A_Organic_Chemistry_(Bruice)/14%3A_NMR_Spectroscopy/14.23%3A_X-Ray_Crystallography
Skip to main content 14.23: X-Ray Crystallography Last updated : Aug 21, 2014 Save as PDF 14.22: NMR Used in Medicine is Called Magnetic Resonance Imaging 15: Aromaticity (Reactions of Benzene) Page ID : 16369 ( \newcommand{\kernel}{\mathrm{null}\,}) X-ray Crystallography is a scientific method used to determine the arrangement of atoms of a crystalline solid in three dimensional space. This technique takes advantage of the interatomic spacing of most crystalline solids by employing them as a diffraction gradient for x-ray light, which has wavelengths on the order of 1 angstrom (10-8 cm). Introduction In 1895, Wilhelm Rontgen discovered x- rays. The nature of x- rays, whether they were particles or electromagnetic radiation, was a topic of debate until 1912. If the wave idea was correct, researchers knew that the wavelength of this light would need to be on the order of 1 Angstrom (A) (10-8 cm). Diffraction and measurement of such small wavelengths would require a gradient with spacing on the same order of magnitude as the light. In 1912, Max von Laue, at the University of Munich in Germany, postulated that atoms in a crystal lattice had a regular, periodic structure with interatomic distances on the order of 1 A. Without having any evidence to support his claim on the periodic arrangements of atoms in a lattice, he further postulated that the crystalline structure can be used to diffract x-rays, much like a gradient in an infrared spectrometer can diffract infrared light. His postulate was based on the following assumptions: the atomic lattice of a crystal is periodic, x- rays are electromagnetic radiation, and the interatomic distance of a crystal are on the same order of magnitude as x- ray light. Laue's predictions were confirmed when two researchers: Friedrich and Knipping, successfully photographed the diffraction pattern associated with the x-ray radiation of crystalline CuSO4⋅5H2O. The science of x-ray crystallography was born. The arrangement of the atoms needs to be in an ordered, periodic structure in order for them to diffract the x-ray beams. A series of mathematical calculations is then used to produce a diffraction pattern that is characteristic to the particular arrangement of atoms in that crystal. X-ray crystallography remains to this day the primary tool used by researchers in characterizing the structure and bonding of organometallic compounds. Diffraction Diffraction is a phenomena that occurs when light encounters an obstacle. The waves of light can either bend around the obstacle, or in the case of a slit, can travel through the slits. The resulting diffraction pattern will show areas of constructive interference, where two waves interact in phase, and destructive interference, where two waves interact out of phase. Calculation of the phase difference can be explained by examining Figure 1 below. In the figure below, two parallel waves, BD and AH are striking a gradient at an angle θo. The incident wave BD travels farther than AH by a distance of CD before reaching the gradient. The scattered wave (depicted below the gradient) HF, travels father than the scattered wave DE by a distance of HG. So the total path difference between path AHGF and BCDE is CD - HG. To observe a wave of high intensity (one created through constructive interference), the difference CD - HG must equal to an integer number of wavelengths to be observed at the angle psi, CD−HG=nλ, where λ is the wavelength of the light. 
Applying some basic trigonometric properties, the following two equations can be shown about the lines: CD=xcos(θo) and HG=xcos(θ) where x is the distance between the points where the diffraction repeats. Combining the two equations, x(cosθo−cosθ)=nλ Bragg's Law Diffraction of an x-ray beam, occurs when the light interacts with the electron cloud surrounding the atoms of the crystalline solid. Due to the periodic crystalline structure of a solid, it is possible to describe it as a series of planes with an equal interplaner distance. As an x-ray's beam hits the surface of the crystal at an angle ?, some of the light will be diffracted at that same angle away from the solid (Figure 2). The remainder of the light will travel into the crystal and some of that light will interact with the second plane of atoms. Some of the light will be diffracted at an angle theta, and the remainder will travel deeper into the solid. This process will repeat for the many planes in the crystal. The x-ray beams travel different pathlengths before hitting the various planes of the crystal, so after diffraction, the beams will interact constructively only if the path length difference is equal to an integer number of wavelengths (just like in the normal diffraction case above). In the figure below, the difference in path lengths of the beam striking the first plane and the beam striking the second plane is equal to BG + GF. So, the two diffracted beams will constructively interfere (be in phase) only if BG+GF=nλ. Basic trigonometry will tell us that the two segments are equal to one another with the interplaner distance times the sine of the angle θ. So we get: BG=BC=dsinθ(14.23.1) Thus, 2dsinθ=nλ(14.23.2) This equation is known as Bragg's Law, named after W. H. Bragg and his son, W. L. Bragg; who discovered this geometric relationship in 1912. {C}{C}Bragg's Law relates the distance between two planes in a crystal and the angle of reflection to the x-ray wavelength. The x-rays that are diffracted off the crystal have to be in-phase in order to signal. Only certain angles that satisfy the following condition will register: sinθ=nλ2d(14.23.3) For historical reasons, the resulting diffraction spectrum is represented as intensity vs. 2θ. Instrument Components The main components of an x-ray instrument are similar to those of many optical spectroscopic instruments. These include a source, a device to select and restrict the wavelengths used for measurement, a holder for the sample, a detector, and a signal converter and readout. However, for x-ray diffraction; only a source, sample holder, and signal converter/readout are required. The Source x-ray tubes provides a means for generating x-ray radiation in most analytical instruments. An evacuated tube houses a tungsten filament which acts as a cathode opposite to a much larger, water cooled anode made of copper with a metal plate on it. The metal plate can be made of any of the following metals: chromium, tungsten, copper, rhodium, silver, cobalt, and iron. A high voltage is passed through the filament and high energy electrons are produced. The machine needs some way of controlling the intensity and wavelength of the resulting light. The intensity of the light can be controlled by adjusting the amount of current passing through the filament; essentially acting as a temperature control. The wavelength of the light is controlled by setting the proper accelerating voltage of the electrons. 
The voltage placed across the system will determine the energy of the electrons traveling towards the anode. X-rays are produced when the electrons hit the target metal. Because the energy of light is inversely proportional to wavelength (E=hc=h(1/λ), controlling the energy, controls the wavelength of the x-ray beam. X-ray Filter Monochromators and filters are used to produce monochromatic x-ray light. This narrow wavelength range is essential for diffraction calculations. For instance, a zirconium filter can be used to cut out unwanted wavelengths from a molybdenum metal target (see figure 4). The molybdenum target will produce x-rays with two wavelengths. A zirconium filter can be used to absorb the unwanted emission with wavelength Kβ, while allowing the desired wavelength, Kα to pass through. Needle Sample Holder The sample holder for an x-ray diffraction unit is simply a needle that holds the crystal in place while the x-ray diffractometer takes readings. Signal Converter In x-ray diffraction, the detector is a transducer that counts the number of photons that collide into it. This photon counter gives a digital readout in number of photons per unit time. Below is a figure of a typical x-ray diffraction unit with all of the parts labeled. Fourier Transform In mathematics, a Fourier transform is an operation that converts one real function into another. In the case of FTIR, a Fourier transform is applied to a function in the time domain to convert it into the frequency domain. One way of thinking about this is to draw the example of music by writing it down on a sheet of paper. Each note is in a so-called "sheet" domain. These same notes can also be expressed by playing them. The process of playing the notes can be thought of as converting the notes from the "sheet" domain into the "sound" domain. Each note played represents exactly what is on the paper just in a different way. This is precisely what the Fourier transform process is doing to the collected data of an x-ray diffraction. This is done in order to determine the electron density around the crystalline atoms in real space. The following equations can be used to determine the electrons' position: p(x,y,z)=∑h∑k∑lF(hkl)e−2πi(hx+ky+lz)(14.23.4) ∫10∫10∫10p(x,y,z)e2πi(hx+ky+lz)dxdydz(14.23.5) F(q)=|F(q)|eiϕ(q)(14.23.6) where p(xyz) is the electron density function, and F(hkl) is the electron density function in real space. Equation 1 represents the Fourier expansion of the electron density function. To solve for F(hkl), the equation 1 needs to be evaluated over all values of h, k, and l, resulting in Equation 2. The resulting function F(hkl) is generally expressed as a complex number (as seen in equation 3 above) with |F(q)| representing the magnitude of the function and ϕ representing the phase. Crystallization In order to run an x-ray diffraction experiment, one must first obtain a crystal. In organometallic chemistry, a reaction might work but when no crystals form, it is impossible to characterize the products. Crystals are grown by slowly cooling a supersaturated solution. Such a solution can be made by heating a solution to decrease the amount of solvent present and to increase the solubility of the desired compound in the solvent. Once made, the solution must be cooled gradually. Rapid temperature change will cause the compound to crash out of solution, trapping solvent and impurities within the newly formed matrix. Cooling continues as a seed crystal forms. 
This crystal is a point where solute can deposit out of the solution and into the solid phase. Solutions are generally placed into a freezer (-78 ºC) in order to ensure all of the compound has crystallized. One way to ensure gradual cooling in a -78 ºC freezer is to place the container housing the compound into a beaker of ethanol. The ethanol will act as a temperature buffer, ensuring a slow decrease in the temperature gradient between the flask and the freezer. Once crystals are grown, it is imperative that they remain cold as any addition of energy will cause a disruption of the crystal lattice, which will yield bad diffraction data. The result of an organometallic chromium compound crystallization can be seen below. Figure 14.23.6: Organometallic chromium crystals in a Schlenk under nitrogen Mounting the Crystal Due to the air-sensitivity of most organometallic compounds, crystals must be transported in a highly viscous organic compound called paratone oil (Figure 14.23.7). Crystals are abstracted from their respective Schlenks by dabbing the end of a spatula with the paratone oil and then sticking the crystal onto the oil. Although there might be some exposure of the compounds to air and water, crystals can withstand more exposure than solution (of the preserved protein) before degrading. On top of serving to protect the crystal, the paratone oil also serves as the glue to bind the crystal to the needle. Figure 14.23.7 Rotating Crystal Method To describe the periodic, three dimensional nature of crystals, the Laue equations are employed: a(cosθo–cosθ)=hλ(14.23.7) b(cosθo–cosθ)=kλ(14.23.8) c(cosθo–cosθ)=lλ(14.23.9) where a, b, and c are the three axes of the unit cell, θo, o, ?o are the angles of incident radiation, and ?, ?, ? are the angles of the diffracted radiation. A diffraction signal (constructive interference) will arise when h, k, and l are integer values. The rotating crystal method employs these equations. X-ray radiation is shown onto a crystal as it rotates around one of its unit cell axis. The beam strikes the crystal at a 90 degree angle. Using equation 1 above, we see that if θo is 90 degrees, then cosθo=0. For the equation to hold true, we can set h=0, granted that θ=90o. The above three equations will be satisfied at various points as the crystal rotates. This gives rise to a diffraction pattern (shown in the image below as multiple h values). The cylindrical film is then unwrapped and developed. The following equation can be used to determine the length axis around which the crystal was rotated: a=chλsintan−1(y/r where a is the length of the axis, y is the distance from h=0 to the h of interest, r is the radius of the firm, and ? is the wavelength of the x-ray radiation used. The first length can be determined with ease, but the other two require far more work, including remounting the crystal so that it rotates around that particular axis. Figure 14.23.8 X-ray Crystallography of Proteins The crystals that form are frozen in liquid nitrogen and taken to the synchrotron which is a highly powered tunable x-ray source. They are mounted on a goniometer and hit with a beam of x-rays. Data is collected as the crystal is rotated through a series of angles. The angle depends on the symmetry of the crystal. 
Figure 14.23.9: Top Left) This is a picture of a protein crystal mounted on a loop with respect to the UC Davis Structural Biology Lab; Bottom Right) This is a diffraction pattern created from the APS Kinase D63N Mutant of the above crystal with respect to the UC Davis Structural Biology Lab Proteins are among the many biological molecules that are used for x-ray Crystallography studies. They are involved in many pathways in biology, often catalyzing reactions by increasing the reaction rate. Most scientists use x-ray Crystallography to solve the structures of protein and to determine functions of residues, interactions with substrates, and interactions with other proteins or nucleic acids. Proteins can be co - crystallized with these substrates, or they may be soaked into the crystal after crystallization. Figure 14.23.10: Top Left) This is the structure of APS Kinase co - crystallized with ligands ADP and APS created via pymol by an undergrad working in the Structural Biology lab at UC Davis; Bottom right) This is the mutant overlay of APS kinase. The teal is the wild - type and the lime green is the mutant. D63 (from the wild-type) is mutated to asparagine. Images created by pymol by an undergrad working in the Structural Biology lab at UC Davis. Protein Crystallization Proteins will solidify into crystals under certain conditions. These conditions are usually made up of salts, buffers, and precipitating agents. This is often the hardest step in x-ray crystallography. Hundreds of conditions varying the salts, pH, buffer, and precipitating agents are combined with the protein in order to crystallize the protein under the right conditions. This is done using 96 well plates; each well containing a different condition and crystals; which form over the course of days, weeks, or even months. The pictures below are crystals of APS Kinase D63N from Penicillium chrysogenum taken at the Chemistry building at UC Davis after crystals formed over a period of a week. Figure 14.23.11: The experiment was originally proposed by Dr. Andy Fisher in the Structural Biology lab. Michelle Towles was a research assistant to Sean Gay and purified the protein and set up the crystal trays. References Skoog, D . A.; Holler, F. J.; Stanley R. C.; Principles of Instrumental Analysis; Thomson Brooks/Cole: Belmont CA, 2007. Sands, D. E.; Introduction to Crystallography; Dover Publications, Inc.; New York, 1975 Drenth, Jan. Principles of Protein x-ray Crystallography, 3rd edition. 2007, Springer Science + Business Media, LLC. pg. 14. Rhodes, Gale. Crystallography Made Crystal Clear, 3rd edition. 2006, Elsevier Inc. pg. 33, 55 - 57. Actual experimentation done of APS Kinase D63N Penicillium Chrysogenum. 14.22: NMR Used in Medicine is Called Magnetic Resonance Imaging 15: Aromaticity (Reactions of Benzene)
189375
https://chem.libretexts.org/Courses/University_of_Georgia/CHEM_3212%3A_Physical_Chemistry_II/01%3A_The_Properties_of_Gases/1.03%3A_The_Kinetic_Molecular_Theory_of_Gases
1.3: The Kinetic Molecular Theory of Gases - Chemistry LibreTexts Skip to main content Table of Contents menu search Search build_circle Toolbar fact_check Homework cancel Exit Reader Mode school Campus Bookshelves menu_book Bookshelves perm_media Learning Objects login Login how_to_reg Request Instructor Account hub Instructor Commons Search Search this book Submit Search x Text Color Reset Bright Blues Gray Inverted Text Size Reset +- Margin Size Reset +- Font Type Enable Dyslexic Font - [x] Downloads expand_more Download Page (PDF) Download Full Book (PDF) Resources expand_more Periodic Table Physics Constants Scientific Calculator Reference expand_more Reference & Cite Tools expand_more Help expand_more Get Help Feedback Readability x selected template will load here Error This action is not available. chrome_reader_mode Enter Reader Mode 1: The Properties of Gases CHEM 3212: Physical Chemistry II { } { "1.01:The_Empirical_Gas_Laws" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "1.02:_The_Ideal_Gas_Law" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "1.03:_The_Kinetic_Molecular_Theory_of_Gases" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "1.04:_Kinetic_Energy" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "1.05:_Grahams_Law_of_Effusion" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "1.06:_Collisions_with_Other_Molecules" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "1.07:_Real_Gases" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "1.08:_Intermolecular_Forces" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "1.09:_Specific_Interactions" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "1.0E:_1.E:_Gases(Exercises)" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "1.0S:1.S:_Gases(Summary)" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" } { "00:_Front_Matter" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "01:_The_Properties_of_Gases" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "02:_The_Boltzmann_Factor_and_Partition_Functions" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "03:_Quantum_Review" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "04:_Partition_Functions_of_Model_Systems" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "05:_Energy_and_Enthalpy" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "06:_Entropy_Part_I" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "07:_Entropy_Part_II" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "08:_Gibbs_and_Helmholtz_Energies" : "property get Map 
MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "09:_Phase_Equilibria" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "10:_Chemical_Equilibrium" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "11:_Solutions-_Liquid-Liquid_Solutions" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", Math_Review : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1", "zz:_Back_Matter" : "property get Map MindTouch.Deki.Logic.ExtensionProcessorQueryProvider+<>c__DisplayClass230_0.b__1" } Thu, 05 Dec 2019 23:21:01 GMT 1.3: The Kinetic Molecular Theory of Gases 199132 199132 Geoffrey Smith { } Anonymous Anonymous User 2 false false [ "article:topic", "Maxwell-Boltzmann distribution", "Kinetic Molecular Theory of Gases", "average speed in gas", "most probable speed in gas", "RMS speed in gas", "authorname:flemingp", "showtoc:yes", "license:ccbyncsa", "transcluded:yes", "source-chem-84297", "licenseversion:40" ] [ "article:topic", "Maxwell-Boltzmann distribution", "Kinetic Molecular Theory of Gases", "average speed in gas", "most probable speed in gas", "RMS speed in gas", "authorname:flemingp", "showtoc:yes", "license:ccbyncsa", "transcluded:yes", "source-chem-84297", "licenseversion:40" ] Search site Search Search Go back to previous article Sign in Username Password Sign in Sign in Sign in Forgot password Contents 1. Home 2. Campus Bookshelves 3. University of Georgia 4. CHEM 3212: Physical Chemistry II 5. 1: The Properties of Gases 6. 1.3: The Kinetic Molecular Theory of Gases Expand/collapse global location CHEM 3212: Physical Chemistry II Front Matter 1: The Properties of Gases 2: The Boltzmann Factor and Partition Functions 3: Quantum Review 4: Partition Functions of Model Systems 5: Energy and Enthalpy 6: Entropy, Part I 7: Entropy, Part II 8: Gibbs and Helmholtz Energies 9: Phase Equilibria 10: Chemical Equilibrium 11: Solutions- Liquid-Liquid Solutions Math Review Back Matter 1.3: The Kinetic Molecular Theory of Gases Last updated Dec 5, 2019 Save as PDF 1.2: The Ideal Gas Law 1.4: Kinetic Energy picture_as_pdf Full Book Page Downloads Full PDF Import into LMS Individual ZIP Buy Print Copy Print Book Files Buy Print CopyReview / Adopt Submit Adoption Report Submit a Peer Review View on CommonsDonate Page ID 199132 Patrick Fleming California State University East Bay ( \newcommand{\kernel}{\mathrm{null}\,}) Table of contents 1. Normalizing the Maxwell-Boltzmann Distribution 2. Calculating an Average from a Probability Distribution 3. Calculating the average value of v x 4. Calculating the Average Speed 5. Example 1.3.1: 1. Solution Theoretical models attempting to describe the nature of gases date back to the earliest scientific inquiries into the nature of matter and even earlier! In about 50 BC, Lucretius, a Roman philosopher, proposed that macroscopic bodies were composed of atoms that continually collide with one another and are in constant motion, despite the observable reality that the body itself is as rest. However, Lucretius’ ideas went largely ignored as they deviated from those of Aristotle, whose views were more widely accepted at the time. In 1738, Daniel Bernoulli (Bernoulli, 1738) published a model that contains the basic framework for the modern Kinetic Molecular theory. 
Rudolf Clausius furthered the model in 1857 by (among other things) introducing the concept of mean free path (Clausius, 1857). These ideas were further developed by Maxwell (Maxwell, Molecules, 1873). But, because atomic theory was not fully embraced in the early 20 th century, it was not until Albert Einstein published one of his seminal works describing Brownian motion (Einstein, 1905) in which he modeled matter using a kinetic theory of molecules that the idea of an atomic (or molecular) picture really took hold in the scientific community. In its modern form, the Kinetic Molecular Theory of gasses is based on five basic postulates. Gas particles obey Newton’s laws of motion and travel in straight lines unless they collide with other particles or the walls of the container. Gas particles are very small compared to the averages of the distances between them. Molecular collisions are perfectly elastic so that kinetic energy is conserved. Gas particles so not interact with other particles except through collisions. There are no attractive or repulsive forces between particles. The average kinetic energy of the particles in a sample of gas is proportional to the temperature. Qualitatively, this model predicts the form of the ideal gas law. More particles means more collisions with the wall ((p \propto n)) Smaller volume means more frequent collisions with the wall ((p \propto 1/V)) Higher molecular speeds means more frequent collisions with the walls ((p \propto T)) Putting all of these together yields p∝n⁢T V=k⁢n⁢T V which is exactly the form of the ideal gas law! The remainder of the job is to derive a value for the constant of proportionality (k) that is consistent with experimental observation. For simplicity, imagine a collection of gas particles in a fixed-volume container with all of the particles traveling at the same velocity. What implications would the kinetic molecular theory have on such a sample? One approach to answering this question is to derive an expression for the pressure of the gas. The pressure is going to be determined by considering the collisions of gas molecules with the wall of the container. Each collision will impart some force. So the greater the number of collisions, the greater the pressure will be. Also, the larger force imparted per collision, the greater the pressure will be. And finally, the larger the area over which collisions are spread, the smaller the pressure will be. p∝(number of collisions)×(force imparted per collision)a⁢r⁢e⁢a Figure 1.3.1: The "collision volume" is the subset of the total volume that contains molecules that will actually collide with area A in the time interval Δ⁢t. First off, the pressure that the gas exerts on the walls of the container would be due entirely to the force imparted each time a molecule collides with the interior surface of the container. This force will be scaled by the number of molecules that hit the area of the wall in a given time. For this reason, it is convenient to define a “collision volume”. V c⁢o⁢l=(v x⋅Δ⁢t)⋅A where v x is the speed the molecules are traveling in the x direction, (\Delta t) is the time interval (the product of v x·Δ⁢T gives the length to the collision volume box) and A is the area of the wall with which the molecules will collide. Half of the molecules within this volume will collide with the wall since half will be traveling toward it and half will be traveling away from it. 
The number of molecules in this collision volume will be given by the total number of molecules in the sample and the fraction of the total volume that is the collision volume. And thus, the number of molecules that will collide with the wall is given by N c⁢o⁢l=1 2⁢N t⁢o⁢t⁢V c⁢o⁢l V And thus the number of molecules colliding with the wall will be N c⁢o⁢l=1 2⁢N t⁢o⁢t⁢(v x⋅Δ⁢t)⋅A V The magnitude of that force imparted per collision will be determined by the time-rate of change in momentum of each particle as it hits the surface. It can be calculated by determining the total momentum change and dividing by the total time required for the event. Since each colliding molecule will change its velocity from v x to –v x, the magnitude of the momentum change is 2(mv x). Thus the force imparted per collision is given by F=2⁢(m⁢v x)Δ⁢t and the total force imparted is (1.3.1)F t⁢o⁢t=N c⁢o⁢l⁡2⁢(m⁢v x)Δ⁢t(1.3.2)=1 2⁢N t⁢o⁢t⁡[(v x⁢Δ⁢t)⁢A V]⁢2⁢(m⁢v x)Δ⁢t=N t⁢o⁢t⁡(m⁢v x 2 V)⁢A Since the pressure is given as the total force exerted per unit area, the pressure is given by p=F t⁢o⁢t A=N t⁢o⁢t⁡(m⁢v x 2 V)=N t⁢o⁢t⁡m V⁢v x 2 The question then becomes how to deal with the velocity term. Initially, it was assumed that all of the molecules had the same velocity, and so the magnitude of the velocity in the x-direction was merely a function of the trajectory. However, real samples of gases comprise molecules with an entire distribution of molecular speeds and trajectories. To deal with this distribution of values, we replace (v x 2) with the squared average of velocity in the x direction ⟨v x⟩2. (1.3.4)p=N t⁢o⁢t⁢m V⁢⟨v x⟩2 The distribution function for velocities in the x direction, known as the Maxwell-Boltzmann distribution, is given by: f⁡(v x)=m 2⁢π⁢k B⁢T⏟normalization term⁢exp⁡(−m⁢v x 2 2⁢k B⁢T)⏟exponential term This function has two parts: a normalization constant and an exponential term. The normalization constant is derived by noting that (1.3.5)∫−∞∞f⁡(v x)⁢d⁢v x=1 Normalizing the Maxwell-Boltzmann Distribution The Maxwell-Boltzmann distribution has to be normalized because it is a continuous probability distribution. As such, the sum of the probabilities for all possible values of v xmustbe unity. And since v­⁢x can take any value between -∞ and ∞, then Equation 1.3.5 must be true. So if the form of f⁡(v x) is assumed to be f⁡(v x)=N⁢exp−(m⁢v x 2 2⁢k B⁢T) The normalization constant N can be found from ∫−∞∞f⁡(v x)⁢d⁢v x=∫−∞∞N⁢exp⁡(−m⁢v x 2 2⁢k B⁢T)⁢d⁢v x=1 The expression can be simplified by letting α=m/2⁢k B⁢T. It is then more simply written N⁢∫−∞∞exp⁡(−m⁢v x 2 2⁢k B⁢T)⁢d⁢v x=1 A table of definite integrals says that ∫−∞∞e−a⁢x 2 d x=π a So N⁢π α=(m 2⁢π⁢k B⁢T)1/2 And thus the normalized distribution function is given by (1.3.6)f⁡(v x)=(m 2⁢π⁢k B⁢T)1/2⁢exp⁡(m⁢v x 2 2⁢k B⁢T) Calculating an Average from a Probability Distribution Calculating an average for a finite set of data is fairly easy. The average is calculated by x¯=1 N⁢∑i=1 N x i But how does one proceed when the set of data is infinite? Or how does one proceed when all one knows are the probabilities for each possible measured outcome? It turns out that that is fairly simple too! x¯=∑i=1 N x i⁢P i where P i is the probability of measuring the value x i. This can also be extended to problems where the measurable properties are not discrete (like the numbers that result from rolling a pair of dice) but rather come from a continuous parent population. 
In this case, if the probability is of measuring a specific outcome, the average value can then be determined by x¯=∫x⁢P⁡(x)d x where P⁡(x) is the function describing the probability distribution, and with the integration taking place across all possible values that x can take. Calculating the average value of v x A value that is useful (and will be used in further developments) is the average velocity in the x direction. This can be derived using the probability distribution, as shown in the mathematical development box above. The average value of v x is given by ⟨v x⟩=∫−∞∞v x(f⁡(v x)⁢d⁢x This integral will, by necessity, be zero. This must be the case as the distribution is symmetric, so that half of the molecules are traveling in the +x direction, and half in the –x direction. These motions will have to cancel. So, a more satisfying result will be given by considering the magnitude of v x, which gives the speed in the x direction. Since this cannot be negative, and given the symmetry of the distribution, the problem becomes ⟨|v x|⟩=2⁢∫0∞v x(f⁡(v x)⁢d⁢x In other words, we will consider only half of the distribution, and then double the result to account for the half we ignored. For simplicity, we will write the distribution function as f⁡(v x)=N⁢exp⁡(−α⁢v x 2) where N=(m 2⁢π⁢k B⁢T)1/2 and α=m 2⁢k B⁢T. A table of definite integrals shows ∫0∞x⁢e−a⁢x 2 d x=1 2⁢a so ⟨v x⟩=2⁢N⁡(1 2⁢α)=N α Substituting our definitions for N and α produces ⟨v x⟩=(m 2⁢π⁢k B⁢T)1/2⁢(2⁢k B⁢T m)=(2⁢π⁢k B⁢T π⁢m)1/2 This expression indicates the average speed for motion of in one direction. However, real gas samples have molecules not only with a distribution of molecular speeds and but also a random distribution of directions. Using normal vector magnitude properties (or simply using the Pythagorean Theorem), it can be seen that ⟨v⟩2=⟨v x⟩2+⟨v y⟩2+⟨v z⟩2 Since the direction of travel is random, the velocity can have any component in x, y, or z directions with equal probability. As such, the average value of the x, y, or z components of velocity should be the same. And so ⟨v⟩2=3⁢⟨v x⟩2 Substituting this into the expression for pressure (Equation 1.3.4) yields p=N t⁢o⁢t⁢m 3⁢V⁢⟨v⟩2 All that remains is to determine the form of the distribution of velocity magnitudes the gas molecules can take. One of the first people to address this distribution was James Clerk Maxwell (1831-1879). In his 1860 paper (Maxwell, Illustrations of the dynamical theory of gases. Part 1. On the motions and collisions of perfectly elastic spheres, 1860), proposed a form for this distribution of speeds which proved to be consistent with observed properties of gases (such as their viscosities). He derived this expression based on a transformation of coordinate system from Cartesian coordinates (x, y, z) to spherical polar coordinates (v, θ, ϕ). In this new coordinate system, v represents the magnitude of the velocity (or the speed) and all of the directional data is carried in the angles θ and ϕ. The infinitesimal volume unit becomes d⁢x d⁢y d⁢z=v 2⁢sin⁡(θ)⁢d⁢v d⁢θ d⁢ϕ Applying this transformation of coordinates, and ignoring the angular part (since he was interested only in the speed) Maxwell’s distribution (Equation 1.3.6) took the following form (1.3.7)f⁡(v)=N⁢v 2⁢exp⁡(m⁢v 2 2⁢k B⁢T) This function has three basic parts to it: a normalization constant (N), a velocity dependence (v 2), and an exponential term that contains the kinetic energy (½⁢m⁢v 2). 
Since the function represents the fraction of molecules with the speed v, the sum of the fractions for all possible velocities must be unity. This sum can be calculated as an integral. The normalization constant ensures that ∫0∞f⁡(v)d v=1 Choosing the normalization constant as N=4⁢π⁢(m 2⁢π⁢k B⁢T)3 yields the final form of the Maxwell distribution of molecular speeds. (1.3.8)N=4⁢π⁢(m 2⁢π⁢k B⁢T)3⁢v 2⁢exp⁡(m⁢v 2 2⁢k B⁢T) At low velocities, the v 2 term causes the function to increase with increasing v, but then at larger values of v, the exponential term causes it to drop back down asymptotically to zero. The distribution will spread over a larger range of speed at higher temperatures, but collapse to a smaller range of values at lower temperatures (Table 2.3.1). Figure 2.3.1: Maxwell Distribution of speeds for hydrogen molecules at differing temperatures. Calculating the Average Speed Using the Maxwell distribution as a distribution of probabilities, the average molecular speed in a sample of gas molecules can be determined. (1.3.9)⟨v⟩=∫−∞∞v⁢f⁡(v)d v(1.3.10)=∫−∞∞v 4⁢π⁢(m 2⁢π⁢k B⁢T)3⁢v 2⁢exp⁡(m⁢v 2 2⁢k B⁢T)d⁢v(1.3.11)=4⁢π⁢(m 2⁢π⁢k B⁢T)3⁢∫−∞∞v 3⁢exp⁡(m⁢v 2 2⁢k B⁢T)d⁢v The following can be found in a table of integrals: ∫0∞x 2⁢n+1⁢e−a⁢x 2 d x=n!2⁢a n+1 So ⟨v⟩=4⁢π⁢(m 2⁢π⁢k B⁢T)3⁢[1 2⁢(m 2⁢k B⁢T)2] Which simplifies to ⟨v⟩=(8⁢k B⁢T π⁢m)1/2 Note: the value of ⟨v⟩ is twice that of ⟨v x⟩ which was derived in an earlier example! ⟨v⟩=2⁢⟨v x⟩ Example 1.3.1: What is the average value of the squared speed according to the Maxwell distribution law? Solution (1.3.12)⟨v 2⟩=∫−∞∞v 2⁢f⁡(v)d v(1.3.13)=∫−∞∞v 2 4⁢π⁢(m 2⁢π⁢k B⁢T)3⁢v 2⁢exp⁡(m⁢v 2 2⁢k B⁢T)d⁢v(1.3.14)=4⁢π⁢(m 2⁢π⁢k B⁢T)3⁢∫−∞∞v 4⁢exp⁡(m⁢v 2 2⁢k B⁢T)d⁢v A table of integrals indicates that ∫0∞x 2⁢n⁢e−a⁢x 2 d x=1⋅3 c⁢d⁢o⁢t⁢5…(2⁢n−1)2 n+1⁢a n⁢π a Substitution (noting that n=2) yields ⟨v 2⟩=4⁢π⁢(m 2⁢π⁢k B⁢T)3⁢[1⋅3 2 3⁢(m 2⁢k B⁢T)2⁢π(m 2⁢k B⁢T)] which simplifies to ⟨v 2⟩=3⁢k B⁢T m Note: The square root of this average squared speed is called the root mean square (RMS) speed, and has the value v r⁢m⁢s=⟨v 2⟩=(3⁢k B⁢T m)1/2 The entire distribution is also affected by molecular mass. For lighter molecules, the distribution is spread across a broader range of speeds at a given temperature, but collapses to a smaller range for heavier molecules (Table 2.3.2). Figure 2.3.2: Maxwell Distribution of speeds at 800 K for different gasses of differing molecular masses. The probability distribution function can also be used to derive an expression for the most probable speed (v m⁢p), the average (v a⁢v⁢e), and the root-mean-square (v r⁢m⁢s) speeds as a function of the temperature and masses of the molecules in the sample. The most probable speed is the one with the maximum probability. That will be the speed that yields the maximum value of f⁡(v). It is found by solving the expression d d⁢v⁢f⁡(v)=0 for the value of v that makes it true. This will be the value that gives the maximum value of f⁡(v) for the given temperature. Similarly, the average value can be found using the distribution in the following fashion v a⁢v⁢e=⟨v⟩ and the root-mean-square (RMS) speed by finding the square root of the average value of v 2. Both demonstrated above. v r⁢m⁢s=⟨v 2⟩ This page titled 1.3: The Kinetic Molecular Theory of Gases is shared under a CC BY-NC-SA 4.0 license and was authored, remixed, and/or curated by Patrick Fleming. 2.3: The Kinetic Molecular Theory of Gases by Patrick Fleming is licensed CC BY-NC-SA 4.0. 
Toggle block-level attributions Back to top 1.2: The Ideal Gas Law 1.4: Kinetic Energy Was this article helpful? Yes No Recommended articles 2.3: The Kinetic Molecular Theory of GasesThe gas laws were derived from empirical observations. Connecting them to fundamental properties of the gas particles is subject of great interest. Th... 6.8: Statistics for Molecular SpeedsExpected values for several quantities can be calculated from the Maxwell-Boltzmann probability density function. 4.8: Statistics for Molecular SpeedsExpected values for several quantities can be calculated from the Maxwell-Boltzmann probability density function. 12: The Kinetic Molecular TheoryOur continuing goal is to relate the properties of the atoms and molecules to the properties of the materials which they comprise. Kinetic Molecular Theory of Gases Article typeSection or PageAuthorPatrick FlemingLicenseCC BY-NC-SALicense Version4.0Show Page TOCyes on pageTranscludedyes Tags average speed in gas Kinetic Molecular Theory of Gases Maxwell-Boltzmann distribution most probable speed in gas RMS speed in gas source-chem-84297 © Copyright 2025 Chemistry LibreTexts Powered by CXone Expert ® ? The LibreTexts libraries arePowered by NICE CXone Expertand are supported by the Department of Education Open Textbook Pilot Project, the UC Davis Office of the Provost, the UC Davis Library, the California State University Affordable Learning Solutions Program, and Merlot. We also acknowledge previous National Science Foundation support under grant numbers 1246120, 1525057, and 1413739. Privacy Policy. Terms & Conditions. Accessibility Statement.For more information contact us atinfo@libretexts.org. Support Center How can we help? Contact Support Search the Insight Knowledge Base Check System Status× contents readability resources tools ☰ 1.2: The Ideal Gas Law 1.4: Kinetic Energy Complete your gift to make an impact
189376
https://www.cs.jhu.edu/~misha/MyPapers/EUROG20.pdf
EUROGRAPHICS 2020 / U. Assarsson and D. Panozzo (Guest Editors) Volume 39 (2020), Number 2 Polygon Laplacian Made Simple Astrid Bunge1† Philipp Herholz2† Misha Kazhdan3 Mario Botsch1 1Bielefeld University, Germany 2ETH Zurich, Switzerland 3Johns Hopkins University, USA Abstract The discrete Laplace-Beltrami operator for surface meshes is a fundamental building block for many (if not most) geometry pro-cessing algorithms. While Laplacians on triangle meshes have been researched intensively, yielding the cotangent discretization as the de-facto standard, the case of general polygon meshes has received much less attention. We present a discretization of the Laplace operator which is consistent with its expression as the composition of divergence and gradient operators, and is applicable to general polygon meshes, including meshes with non-convex, and even non-planar, faces. By virtually inserting a carefully placed point we implicitly refine each polygon into a triangle fan, but then hide the refinement within the matrix assembly. The resulting operator generalizes the cotangent Laplacian, inherits its advantages, and is empirically shown to be on par or even better than the recent polygon Laplacian of Alexa and Wardetzky [AW11] — while being simpler to compute. CCS Concepts • Computing methodologies →Mesh geometry models; • Theory of computation →Computational geometry; 1. Introduction The Laplace-Beltrami operator, or Laplacian for short, plays a prominent role in geometric modeling and related fields. As a gen-eralization of the second derivative to functions defined on surfaces it is intimately related to the notion of curvature and signal frequen-cies. With triangle meshes being the standard surface representation in computer graphics and geometry processing, the discretization of the Laplace operator for triangular elements has received a lot of attention over the years, with the classical cotangent discretiza-tion [Mac49,Dzi88,PP93,DMSB99] being the de-facto standard. Discrete Laplacians for polygon meshes have been much less well investigated, even though polygon meshes, in particular quad meshes, are ubiquitous in modern production pipelines (and higher-order polygons are considered more noble in the 1884 novel Flat-land [Abb84]). A straightforward approach would be to triangulate all polygons and apply the well-defined triangle-based operators. Unfortunately this approach does not work well in many cases. Polygon meshes are commonly designed to align to certain surface features and capture symmetries of the shape. Introducing an arbi-trary triangulation can break these properties and lead to noticeable artifacts. Alexa and Wardetzky [AW11] proposed a discrete Lapla-cian that directly operates on two-manifold polygon meshes and avoids these problems. However, their operator relies on a suitable choice of parameter, which can considerably influence the results and might not be easy to find, as we show in Section 5. † the first two authors contributed equally In this paper we propose a surprisingly simple, yet very effective discretization of the polygon Laplace operator. Inserting a carefully placed vertex for every polygon allows us to define a refined trian-gulation that retains the symmetries and defines a sensible surface consistent with the polygon mesh. 
Using the Galerkin method, we then coarsen the cotangent Laplacian defined over the triangulation to obtain a Laplacian on the original polygon mesh, completely hid-ing the auxiliary points from the user by encapsulating them within the matrix assembly stage. Our discrete Laplacian operator acts directly on functions de-fined on the vertices of a general polygon mesh. It works accu-rately and robustly even in the presence of non-planar and non-convex faces. By leveraging the cotangent Laplacian in the core of our discretization, we inherit all its benefits. The absence of tweakable parameters makes our operator intuitive and easy to use in practice. Even though our method is quite simple and easy to implement, we will make our source code freely available at 2. Related Work Laplacians for triangle meshes The main goal when construct-ing a discrete Laplacian is to retain as many properties from the smooth setting as possible. While the classical cotangent Laplace operator [Mac49] is negative semi-definite, symmetric, and has linear precision—meaning that the operator yields zero for linear functions defined over a planar domain—it fails to maintain the maximum principle. As a consequence, parametrizations obtained c ⃝2020 The Author(s) Computer Graphics Forum c ⃝2020 The Eurographics Association and John Wiley & Sons Ltd. Published by John Wiley & Sons Ltd. A. Bunge, P. Herholz, M. Kazhdan, M. Botsch / Polygon Laplacian Made Simple with this discretization can suffer from flipped triangles. In con-trast, the combinatorial Laplacian [Tau95, Zha04] guarantees the maximum principle while failing to have linear precision. Bobenko and Springborn [BS07] introduced a discrete Laplacian based on the intrinsic Delaunay triangulation. While guaranteeing the max-imum principle, this operator is defined over an intrinsic mesh with different connectivity. Recently, an efficient data structure for the representation of these meshes has been introduced [SSC19]. The idea of intrinsic triangulations does not extend to the case of non-planar polygon meshes due to the lack of a well defined surface. Other discretizations include the Laplacian of Belkin et al. [BSW08] that provides point-wise convergence and the octree-based Laplacian [CLB∗09] defined to support multigrid solvers. In general, Wardetzky et al. [WMKG07] have shown that there cannot exist a discretization that fulfills a certain set of properties for all meshes, explaining the variety of approaches in the literature. Laplacians for polygon meshes Alexa et al. [AW11] construct a discrete Laplacian for general polygon meshes. They circum-vent the problem that (non-planar) polygons in 3D do not bound a canonical surface patch by considering the projection of the poly-gon onto the plane that yields the largest projection area. Their polygon Laplace operator yields the gradient of this area when ap-plied to the vertex coordinates. However, the action of this oper-ator on the component orthogonal to this projection is defined al-gebraically rather than geometrically, involving parameters without an obvious interpretation. Another potential problem with this con-struction is a relatively large number of negative coefficients, caus-ing the maximum principle to be violated. Xiong et al. [XLH11] define a discrete Laplace operator for the special case of quadrilat-eral meshes by averaging over both triangulations of each quad. 
Virtual refinement in geometry processing To extend the cotan-gent Laplacian to polygon meshes, we refine the mesh by insert-ing a virtual vertex for each face and using these to tessellate each polygon into a triangle fan. Similar to recent work on Subdivision Exterior Calculus [dGDMD16], we define a prolongation opera-tor expressing functions on the coarser polygonal mesh as linear combinations of the triangle hat basis functions on the finer mesh. Then, leveraging the Galerkin method [Fle84], we define the Lapla-cian on the polygon mesh by coarsening the Laplacian from the triangle mesh. As in the method of de Goes et al. [dGDMD16], this gives us the benefit of working over a refined triangle mesh (where discrete Laplacians are well understood) without incurring the computational complexity of working on a finer mesh. Sample applications Applications of Laplacians include the ap-proximation of conformal parametrizations [DMA02], mesh defor-mation [SCOL∗04], and signal processing on meshes [CRK16], to name a few. Replacing the usual cotangent Laplacian with a Lapla-cian defined on polygon meshes directly enables many of these al-gorithms to work in this more general setting. However, the quality of the results depends on the specific construction and properties of the polygon Laplacian. In this work we compare different vari-ants with respect to a set of applications including parametrization [Flo97,GGT06], mean curvature flow [DMA02,KSBC12], spectral analysis [LZ10], and geodesics in heat [CWW13]. 3. Math Review Our approach for defining a polygon Laplacian proceeds in two steps. First, we refine the polygon mesh to define a triangle mesh on which the standard cotangent Laplacian is defined. Then, to ob-tain a Laplacian on the initial polygon mesh, we use the Galerkin method to coarsen the cotangent Laplacian. In the following we briefly review both the derivation of Laplacians on triangle meshes [BKP∗10] and the Galerkin method [Fle84]. Laplacians on triangle meshes Consider a triangle mesh M = (V,T) with vertices V and triangles T. Let |V| and |T| be the numbers of vertices and triangles, respec-tively, and let {φ1,...,φ|V|} be the hat basis (with φi the piecewise linear function that is equal to one at vertex vi and zero at all other vertices). The Laplace operator is discretized as L = M−1 ·S, (1) with M,S ∈R|V|×|V| denoting the mass and stiffness matrices Mi j = Z M φi ·φ j =          |tijk|+|t jih| 12 if j ∈N(i) ∑ k∈N(i) Mik if j = i 0 otherwise , (2) Si j = − Z M ⟨∇φi,∇φ j⟩=          cotαij+cotβij 2 if j ∈N(i) , −∑ k∈N(i) Sik if j = i, 0 otherwise. (3) Here ti jk and tjih are the triangles incident on edge (vi,vj) and ti jk and t jih are their areas; αi j and βi j are the angles opposite edge (vi,vj); N(i) denotes the index set of vertices in the one-ring neigh-borhood of vertex vi. Note that our stiffness matrix S is negative semi-definite (like the Laplace matrix L). This sign choice makes it consistent with the cotangent matrix in graphics. FEM literature typically uses −S as the stiffness matrix. Decomposition into divergence and gradient Following the expression of the Laplacian as the divergence of the gradient in the continuous case, a similar expression is obtained in the discrete case. The gradient can be expressed as the matrix G ∈(R3)|T|×|V| with Gi j =    ni×ej i 2|ti| if vj ∈ti , 0 otherwise, where ni is the unit outward normal of triangle ti and ej i is the counter-clockwise oriented edge of triangle ti opposite vertex vj. 
Then the divergence is D = −GT · e M, (4) where e M is a block diagonal matrix whose i-th block consists of the 3×3 identity matrix multiplied by the area of the i-th triangle. c ⃝2020 The Author(s) Computer Graphics Forum c ⃝2020 The Eurographics Association and John Wiley & Sons Ltd. A. Bunge, P. Herholz, M. Kazhdan, M. Botsch / Polygon Laplacian Made Simple The product of divergence and gradient gives the stiffness matrix S = D·G = −GT · e M·G (5) and the Laplacian becomes L = −M−1 ·GT · e M·G. (6) In practice, the mass matrix is often approximated by a diagonal (lumped) mass matrix with the i-th diagonal entry in the lumped matrix set to the sum of entries in the i-th row of the original matrix. This makes inversion of M straightforward. The Galerkin method Assume that we are given two finite-dimensional nesting function spaces Fc ⊂F f . Given a prolongation operator P injecting the coarser space into the finer, P: Fc , →F f , and given a symmetric positive semi-definite operator Q defining a quadratic energy on the space of functions, we can define a symmetric positive semi-definite operator on the space F f through restriction Qf (φ,ψ) := Q(φ,ψ), ∀φ,ψ ∈F f . (7) In a similar manner, we can define a symmetric positive semi-definite operator Qc on the space Fc. The Galerkin method tells us that the operators are related by Qc = P∗◦Q f ◦P, (8) where P∗is the dual of P. In particular, given bases for Fc and F f and letting P, Qc, and Q f be the matrices representing the operators with respect to these bases, we have Qc = PT ·Q f ·P. (9) 4. Polygon Laplacian When working with general polygon meshes, the finite elements method cannot be applied directly for two reasons. First, when the polygon is not planar, it is not clear what the underlying surface is over which functions should be integrated. Second, even for simple planar polygons, it is not clear how to associate basis functions with the vertices of the mesh. A simple approach would be to triangulate all the polygons, defining a piecewise-linear integration domain, and use the hat ba-sis functions to define the finite elements. This would have the advantage of defining a finite elements system whose dimension equals the number of vertices in the input mesh. Unfortunately, the introduction of diagonal edges would also break the symmetry structure of the polygonization (e.g., in quad meshes where edges are aligned with principal curvature directions). An alternate approach would be to refine the polygon mesh by introducing a new vertex in the middle of each face. This would preserve the symmetry structure, but would come at the cost of an increase in the finite elements system dimension. Also, as we show in Section 5, such an approach can result in poor performance due to the introduction of negative edge weights in the stiffness matrix. (For example, refining a rectangle by adding the mid-point creates two triangles with angles larger than π/2.) Figure 1: Spanned triangle fan on the virtual mesh after inserting the virtual vertex x|V|+j at the j-th face. We propose an in-between approach – introducing a virtual ver-tex in the middle of each face, expressed as the affine combination of the face’s vertices. Naively, the new vertex produces a refined triangle mesh with a finite elements system given by the hat basis functions, as before. However, we then coarsen the refined system, expressing the vertex functions on the coarse mesh as linear com-binations of the hat basis functions on the finer one. 
This new system has dimension equal to the number of vertices in the original polygon mesh (preserving the finite elements system dimension) and has the property that basis functions have overlap-ping support only if the associated vertices lie on a common face (defining a sparse system that preserves the symmetry structure). A further advantage of our approach is that we easily obtain a con-sistent factorization of the stiffness matrix as the product of diver-gence and gradient matrices. 4.1. Construction of the finite elements system Given a polygon mesh M = (V,F), we construct our finite ele-ments system by defining a refined triangle mesh M f = (V f ,T f ). Vertices in the refined mesh, V f , are obtained by introducing a new vertex, v|V|+ j, for every face f j ∈F and setting the position of v|V|+ j to an affine combination of the positions of vertices in f j x|V|+ j = ∑ vi∈f j w jixi, with ∑ vi∈f j w ji = 1. (10) Triangles in the refined mesh, T f , are obtained by replacing each face f j ∈F with the triangle fan connecting the inserted vertex v|V|+ j to the edges in f j, as can be seen in Figure 1. Using the refined triangle mesh, we can define the stiffness ma-trix, Sf ∈R|V f|×|V f|, using the standard cotangent weights of Equation (3). Aggregating the affine weights, we get the prolon-gation matrix P ∈R|V f|×|V|, with Pi j =      1 if i = j, wki if i = |V|+k and vj ∈fk , 0 otherwise. (11) And finally, using the Galerkin method, we obtain an expression for the coarsened polygonal stiffness matrix as S = PT ·Sf ·P. (12) c ⃝2020 The Author(s) Computer Graphics Forum c ⃝2020 The Eurographics Association and John Wiley & Sons Ltd. A. Bunge, P. Herholz, M. Kazhdan, M. Botsch / Polygon Laplacian Made Simple We use the same approach to obtain the coarsened polygonal mass matrix M, but in addition to restricting the refined triangle mass matrix M f , we also lump it to a diagonal matrix: M = lump  PT ·Mf ·P  with (13) lump(M)ij = ( ∑k Mik if i = j, 0 otherwise, (14) setting the diagonal entry to the sum of the entries in the row. We note that our stiffness matrix has non-zero entries if the cor-responding vertices share a face, not just an edge. We also note that solving a linear system using our operators is different from solving the system on the refined mesh and only using the solution values at the original (i.e., non-virtual) vertices. Our approach corresponds to solving the system using a coarser function space. The alterna-tive corresponds to sub-sampling a solution obtained over a refined function space, which could result in aliasing. 4.2. Choice of virtual vertex position Choosing different virtual vertex positions, we provide a frame-work for defining a whole family of polygon Laplace operators. While many choices are possible we would like the virtual vertex to fulfill a set of properties in order to yield a geometric Laplacian: • The vertex should be efficient to compute and uniquely defined. • Since the choice of virtual vertex defines the surface of the poly-gon, we would like to find a point giving small surface area. • For planar polygons, the vertex should be inside the polygon. • To achieve linear precision, the virtual vertex should be an affine combination of polygon vertices (see Equation (10)). A straightforward choice is to use the centroid of the polygon vertices. However, for non-convex (planar) polygons this point can be located outside the polygon. Moreover, it will be biased by an uneven distribution of polygon vertices. 
Another choice is to find a point minimizing the area of the induced triangle fan, which is related to the minimization of the Dirichlet energy. Unfortunately, this point is not uniquely defined. For example, for convex planar polygons, the total area will be identical for every virtual vertex inside the polygon. Furthermore, since area is non-linear in the position of the virtual vertex, less ef-ficient iterative solvers, e.g. Newton’s method, are required to com-pute the position of the virtual vertex. Instead, we opt for the minimizer of the sum of squared triangle areas of the induced triangle fan. Introducing a virtual vertex with position xf into an n-gon with vertex positions (x1,...,xn) allows us to define the triangle fan with n triangles (xi,xi+1,xf ), where indices are interpreted modulo n. The position of the virtual vertex, xf ∈R3, is then defined as the minimizer xf = argmin x n ∑ i=1 area xi,xi+1,x f 2 . (15) Compared to the minimizer of the area, the squared area mini-mizer has two advantages. First, the solution is unique even for planar convex polygons. Second, using the squared area, the ob-jective function becomes quadratic in xf . Also, in contrast to the centroid, the minimizer of the squared area tends to be positioned in the interior of the polygon, even when the polygon is not convex. While the position of our virtual vertex is unique, its expres-sion as the linear combination of polygon vertices usually is not. Since linear precision requires xf to be an affine combination of the polygon’s vertices (see Section 4.3), finding the position of the virtual vertex can be formulated as a minimization directly over the weights w = (w1,...,wn)T: w f = argmin w n ∑ i=1 area xi,xi+1, n ∑ j=1 w jxj !2 such that n ∑ j=1 w j = 1. (16) Noting that when n > 3 the system is under-constrained (with mul-tiple affine weights defining the same unique squared-area mini-mizer xf ), we add the constraint that, of all the weights w defining the minimizing position x f , we prefer the one with minimal L2-norm ∥w∥. This encourages the weights to be as uniform as pos-sible, allowing each polygon vertex to contribute equally. These affine weights can be obtained by solving a single linear system, derived in Appendix A. It is tempting to add non-negativity constraints w j ≥0 to Equa-tion (16) since this would yield convex weights that guarantee a maximum principle for the virtual vertex and prevent negative co-efficients in the coarsened stiffness matrix S (see Equation (12)) as long as they do not appear in Sf . However, this comes at the cost of solving a quadratic program with inequality constraints. We compare our choice of virtual vertex—minimizing the sum of squared triangle areas through affine weights—to the other op-tions (centroid, min. area, convex weights) in Section 5.7. 4.3. Properties of the operator One goal of our construction is to preserve the beneficial numeri-cal properties of the cotangent Laplacian. Symmetry and negative semi-definiteness follow directly from our construction of the stiff-ness matrix S = PT · S f · P, since the refined (cotangent) stiffness matrix Sf has these properties and since the prolongation matrix P has full rank. (Note that Wardetzky and Alexa [WMKG07,AW11] aim for positive definiteness, since they define their geometric Laplacian as the negative of ours.) 
For linear precision we require (S·u)i = 0 whenever vertex vi is not part of the boundary, all incident polygons f j ∋vi are coplanar, and the values of u in the one-ring of vi are obtained by sampling a linear function on the plane. To see that this is the case, we note that by constraining ourselves to use affine weights in defining the prolongation, we ensure that prolonging u to the finer mesh, the values of uf = P · u sample a linear function at all vertices of the fan triangulations incident to vi. Since the cotangent Laplacian has linear precision, this implies that Sf ·uf is zero at vi and all virtual centers v|V|+j with f j ∋vi. Since these are precisely the entries at which the i-th column of P is non-zero, it follows that (PT · Sf · uf )i = 0. Or, equivalently, (S·u)i = 0. It follows that if the initial mesh M is a triangle mesh, then the derived stiffness matrix S is the standard cotangent Laplacian. This c ⃝2020 The Author(s) Computer Graphics Forum c ⃝2020 The Eurographics Association and John Wiley & Sons Ltd. A. Bunge, P. Herholz, M. Kazhdan, M. Botsch / Polygon Laplacian Made Simple is because using affine weights, the finite elements basis defined on the coarse mesh through prolongation is precisely the hat basis. The property of positive off-diagonal values does not hold for the cotangent Laplacian and consequently cannot extend to our construction, resulting in the potential violation of the maximum principle. The lack of positivity is also present in the definition of [AW11], however, our operator tends to contain fewer negative off-diagonal coefficients (see Section 5). 4.4. Implementation Implementing our system is simple, especially if code for the com-putation of the cotangent Laplacian is already available. Our im-plementation is based on the PMP-Library [SB19]. Algorithm 1 illustrates the procedure. Initially we set the prolongation matrix P to the identity of size |V| × |V| (line 3), reflecting the fact that all original vertices appear in the refined mesh. We then loop over all faces, collecting all vertex positions of the polygon, finding affine weights for the optimal virtual vertex, and constructing this point (lines 5–7, Section 4.2). Next we compute the area and cotangent weights for the resulting triangle fan and aggregate them in Mf and Sf (lines 9–10, Section 3). The final operators are constructed by multiplying the matrices Mf and Sf defined on the refined mesh with the prolongation matrix and its transpose (line 12, Section 4.1). Algorithm 1: Assemble polygon stiffness and mass matrix Data: Mesh vertices V and faces F Result: Mass matrix M, stiffness matrix S. 1 S f = 0|V f |×|V f | 2 M f = 0|V f |×|V f | 3 P = I|V|×|V| 4 foreach f ∈F do 5 X ←getVertexPositions( f); 6 w ←findVirtualVertexWeights(X); 7 x ←XT ·w; 8 P ←appendWeightRow(P, f,w); 9 Mf ←Mf +buildTriangleFanMassMatrix( f,X,x); 10 Sf ←Sf +buildTriangleFanCotan(f,X,x); 11 end 12 return M = PT ·M f ·P, S = PT ·Sf ·P 4.5. Gradient and divergence We are also able to factor the stiffness matrix as the product of di-vergence and gradient matrices. As described in Section 3, we can obtain a matrix expression for the gradient and divergence opera-tors over the refined mesh, G f and D f and use those to factor the refined triangle stiffness matrix Sf = Df ·G f . (17) Coarsening, we obtain a factorization of the polygon stiffness ma-trix in terms of coarse polygon divergence and gradient matrices S = PT ·D f | {z } =D · G f ·P | {z } =G . 
(18) Note that the derived gradient operator, G = Gf · P, is a map from functions defined on the vertices of the original polygon mesh to tangent vector fields that are constant on triangles of the refined mesh. These refined triangles, however, can uniquely be identified with half-edges of the original polygon mesh, so that the refined tri-angles never have to be constructed explicitly. Hence, the gradient operator maps from function values at vertices to vectors at half-edges (and conversely for the divergence operator, D = P⊤·Df ). 4.6. Finite Elements Exterior Calculus An alternative approach is to define differential operators by ex-tending Finite Elements Exterior Calculus to polygon meshes. We do this by using our coarsened basis to define Whitney bases for higher-order forms and then using these higher-order bases to de-fine the Hodge star operators. Recall Given a basis of zero-forms {ψi} forming a partition of unity, one can define Whitney bases [Whi57,AFW06] for 1-forms {ψi j} (with i < j and supp(ψi)∩supp(ψj) ̸= ∅) and 2-forms {ψi jk} (with i < j < k and supp(ψi) ∩supp(ψj) ∩supp(ψk) ̸= ∅) by set-ting: ψi j = ψi ·dψj −ψj ·dψi , ψi jk = 2 ψi ·dψj ∧ψk −ψj ·dψi ∧dψk +ψk ·dψi ∧dψ j  . The exterior derivatives are then defined using the combinatorics d0 (i j)k =    −1 i = k 1 j = k 0 otherwise , d1 (i jk)(lm) =        1 i = l, j = m 1 j = l, k = m −1 i = l, k = m 0 otherwise (19) and the discrete Hodge stars are defined using the geometry ⋆ ⋆ ⋆0 i j = ⟨⟨ψi,ψj⟩⟩, (20) ⋆ ⋆ ⋆1 (i j)(kl) = ⟨⟨ψi j,ψkl⟩⟩, (21) ⋆ ⋆ ⋆2 (i jk)(lmn) = ⟨⟨ψi jk,ψlmn⟩⟩, (22) with ⟨⟨·,·⟩⟩denoting the integral of the (dot-)product of k-forms. Using the hat basis on a triangle mesh, the basis functions are lin-ear within each triangle so computing the coefficients reduces to integrating quadratic polynomials over a triangle. Prolonging higher-order forms Given the hat basis on the refined triangle mesh {φ f 1,...,φ f |V f |}, defining a prolongation operator P is equivalent to defining a coarsened basis {φ1,...,φ|V|} on the triangle mesh, with φj = ∑i Pi jφf i . As both bases form a partition of unity, we can define Whitney bases for higher-order forms. In doing so we get: φi j = ∑ k,l PkiPl jφf kl and φi jk = ∑ l,m,n PliPm jPnkφ f lmn, (23) which gives prolongation operators for 0-, 1-, and 2-forms: P0 i j = Pi j, (24) P1 (i j)(kl) = Pik ·Pjl, (25) P2 (i jk)(lmn) = Pil ·P jm ·Pkn. (26) c ⃝2020 The Author(s) Computer Graphics Forum c ⃝2020 The Eurographics Association and John Wiley & Sons Ltd. A. Bunge, P. Herholz, M. Kazhdan, M. Botsch / Polygon Laplacian Made Simple This allows us to define Hodge star operators for the coarsened basis through prolongation: ⋆ ⋆ ⋆k = (Pk)T ·⋆ ⋆ ⋆k,f ·Pk, (27) where ⋆ ⋆ ⋆k is the discrete Hodge star for k-forms defined using the coarsened basis and ⋆ ⋆ ⋆k,f is the discrete Hodge star for k-forms de-fined using the refined hat basis. In particular, this gives a factorization of the stiffness matrix as S = −(d0)T · ⋆ ⋆ ⋆1 · d0, with gradients represented in terms of dif-ferences across polygon edges/diagonals. The minus sign is due to our design choice of a negative semi-definite stiffness matrix, as discussed in Section 3. 5. Evaluation We evaluate our Laplacian in a variety of different geometry pro-cessing operations, applied to a selection of polygon meshes (Fig-ures 2 and 3). These results are compared to two other methods. 
First, we triangulate each (potentially non-planar, non-convex) n-gon into n −2 triangles without inserting an extra point, and then use the standard cotangent Laplacian for triangle meshes. Using the dynamic-programming approach of Liepa [Lie03] we find the poly-gon triangulation that minimizes the sum of squared triangle areas (similar in concept to our squared area minimization). We also ex-perimented with the triangulation that maximizes the minimum tri-angle angle (similar to planar Delaunay triangulations). While the latter yields better-behaved cotangent weights, it tends to produce flips or fold-overs for non-convex polygons, so we used the for-mer for most experiments. Second, we compare our results to the ones obtained with the polygon Laplacian of Alexa and Wardet-zky [AW11], based on their original implementation, using various values for their parameter λ that assigns weight to the component perpendicular to the projection direction. 5.1. Conformalized mean curvature flow A common approach for smoothing meshes is to use implicit in-tegration to solve the diffusion equation [DMSB99]. In our appli-cation we use the conformalized Mean Curvature Flow introduced by Kazhdan et al. [KSBC12], which obtains the coordinates of the mesh vertices at the next time-step, Xt+ε from the coordinates at the current time-step, Xt, iteratively solving the linear system  Mt −εS0 Xt+ε = MtXt (28) with ε the time-step, S0 the initial stiffness matrix, and Mt the mass matrix at time t. After each iteration the mesh is translated back to the origin and re-scaled to its original surface area. Figure 4 demon-strates the resilience of our Laplacian to noise and non-planar as well as non-convex faces. The flow can recover the spherical shape after one iteration and converges after 10 iterations. Figure 5 shows an example of this flow after one and ten iter-ations respectively. The mean curvature is color-coded and shows that the mesh correctly converges to a sphere when using our oper-ator. This example shows that even extreme differences in polygon scale are handled correctly. The results of Alexa and Wardetzky’s REGULAR SPHERE NOISY SPHERE HEX SPHERE FINE SPHERE Figure 2: Spherical meshes used for testing the accuracy of spher-ical harmonics and mean curvature. QUADS 1 QUADS 2 L-SHAPED TETRIS 1 TETRIS 2 Figure 3: Planar meshes used for the evaluation of geodesic dis-tances, including non-convex and non-star shaped tessellations. Figure 4: Stress test for smoothing on a noisy sphere (left). After one (center) and ten iterations (right). Figure 5: Visualization of the mean curvature flow on a quad mesh. Mean curvature is color coded. c ⃝2020 The Author(s) Computer Graphics Forum c ⃝2020 The Eurographics Association and John Wiley & Sons Ltd. A. Bunge, P. Herholz, M. Kazhdan, M. Botsch / Polygon Laplacian Made Simple Ours [AW11] Figure 4 5.075×10−3 5.417×10−3 Figure 5 2.271×10−3 2.69×10−3 Figure 6 1.157×10−3 1.204×10−3 Table 1: Conformal distortion of conformal spherical [KSBC12] and spectral free-boundary [MTAD08] parametrizations. [AW11] Mesh Ours [Lie03] λ = 0.1 λ = 0.5 λ = 1 REGULAR SPHERE 0.0168 0.0469 0.0290 0.0290 0.0290 NOISY SPHERE 0.0469 0.1053 0.0562 0.0814 0.1551 HEX SPHERE 0.0016 0.3711 0.0023 0.0038 0.0075 FINE SPHERE 0.1334 0.0623 0.0279 0.0107 0.0141 Table 2: Root-mean-square error (RMSE) of the curvature devia-tion for different meshes of the unit sphere. operator are qualitatively very similar, however, analyzing the con-formal error shows a small advantage for our method (see Table 1). 
We evaluated the conformal distortion by computing the root-mean-square errors of area-weighted angle differences between the mesh and its spherical parametrization. We opted for this measure because it is not obvious how to compute conformal distortion for non-planar polygons.

5.2. Mean curvature estimation

Noting that the Laplacian of the coordinate function is the mean-curvature normal vector, we can use our Laplace operator to approximate the mean curvature H at vertex v_i:

\[ H(v_i) := \tfrac{1}{2} \left\lVert (L X)_i \right\rVert \cdot \operatorname{sign}\!\left( \left\langle (L X)_i,\, n_i \right\rangle \right) \tag{29} \]

with X the matrix of vertex coordinates and n_i the normal at v_i. Since the mean curvature is 1 at every point on the unit sphere, we can measure the accuracy of our Laplacian by comparing the estimated mean curvature to the true one. Table 2 gives the root-mean-square error (RMSE), comparing curvature estimates over different polygonizations of the sphere, and using different definitions of the Laplace operator. As the table shows, our operator outperforms existing methods on most spherical meshes.

5.3. Parametrization

We compare our operator to [AW11] with respect to conformal parametrization based on Mullen et al.'s [MTAD08] spectral free-boundary parametrization. Figure 6 illustrates that both operators perform well for this quad mesh. In this case we obtain a conformal distortion (cf. Section 5.1) of 1.157×10⁻³, while the method of [AW11] achieves 1.204×10⁻³. For [AW11] we systematically tried to find the best parameter λ and consistently observed very similar behavior with a slightly lower conformal distortion for our operator, across a variety of polygon meshes.

Figure 6: Parametrization of the right half of a quadrangulated monkey head. The result of [AW11] (left) is similar to ours (right), but our operator has slightly lower conformal distortion.

5.4. Reproducing the spherical harmonics

The eigenfunctions of the Laplacian form an orthonormal basis known as the "manifold harmonics" [VL08]. In the case that the surface is a sphere, these eigenfunctions are known to be the spherical harmonics. We evaluate the quality of our Laplacian by measuring the extent to which the true spherical harmonics are eigenvectors of the Laplacian.

We know that the spherical harmonic Y^m_l : S² → R is an eigenfunction of the Laplacian with eigenvalue −l·(l+1). Sampling Y^m_l at the vertices of the spherical mesh, we obtain y^m_l ∈ R^{|V|}. We measure the accuracy of our Laplacian by evaluating

\[ \left\lVert\, y^m_l + \frac{1}{l\,(l+1)}\, L\, y^m_l \,\right\rVert^2_M, \tag{30} \]

with the L²-norm computed relative to the mass matrix M. Table 3 compares the spectral error, giving the sum of errors over the spherical harmonics in the first nine frequencies (1 ≤ l ≤ 9). As the table shows, in this experiment we are unable to reproduce the accuracy of Alexa and Wardetzky's Laplacian. However, our construction does not require selecting a parameter on a case-by-case basis as in [AW11].

Table 3: We measure the extent to which the spherical harmonics of frequencies 1 ≤ l ≤ 9 are eigenvectors, with eigenvalue −l·(l+1), of the Laplacian on spherical meshes.

                                          [AW11]
Mesh            Ours     [Lie03]  λ = 0.1  λ = 0.5  λ = 1
REGULAR SPHERE  0.0393   0.0200   0.0599   0.0220   0.0175
NOISY SPHERE    0.0643   0.0722   0.0969   0.0589   0.1366
HEX SPHERE      7.41e-7  0.0037   9.93e-6  1.46e-6  1.13e-6
FINE SPHERE     0.0003   6.73e-5  0.0005   6.48e-5  8.98e-5
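In code, this error measure can be sketched as follows (our sketch, not the authors'; it treats L as the operator M⁻¹S so that Equation (30) can be evaluated without forming M⁻¹ explicitly):

import scipy.sparse.linalg as spla

def spectral_error(S, M, y, l):
    # residual r = y + L y / (l (l + 1)),  with  L y = M^{-1} (S y)
    Ly = spla.spsolve(M.tocsc(), S @ y)
    r = y + Ly / (l * (l + 1))
    return float(r @ (M @ r))   # squared M-norm of the residual, Eq. (30)

The table entries would then be sums of this quantity over all harmonics with 1 ≤ l ≤ 9.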
Figure 7: Computing geodesic distances on a quad mesh (top left) following the approach of Crane et al. [CWW13], using our operator (a) and Alexa and Wardetzky's (b, using λ = 0.5). Images (c–f) show results using different triangulation strategies: refining the mesh by inserting the virtual point (c), triangulating polygons to maximize the minimum angle (d), and triangulating polygons to minimize squared triangle areas (e). To improve the results of (e), we employ the Laplacian based on the intrinsic Delaunay triangulation [BS07] (f). The triangles of (d) are already Delaunay; therefore, using the intrinsic triangulation does not further improve the result in (d).

5.5. Geodesics in Heat

In [CWW13], Crane et al. proposed the heat method for computing geodesic distances from a selected vertex v_i to all others on the mesh. Let D be the divergence operator, G the gradient operator, and e_i ∈ R^{|V|} the i-th unit vector. The geodesic distances are computed in four steps. First, the impulse function e_i is diffused for a time-step of ε by solving for u ∈ R^{|V|} such that

\[ (M - \varepsilon S)\, u = M \cdot e_i, \tag{31} \]

where we set ε to the squared mean of edge lengths, as suggested in [CWW13]. Next, the gradients of u are computed and normalized to have unit length, resulting in a vector field g ∈ R^{|T^f|×3} with

\[ g_i = -\frac{(G \cdot u)_i}{\lVert (G \cdot u)_i \rVert}. \tag{32} \]

Finally, the geodesic distance d ∈ R^{|V|} is computed by solving for the scalar field whose gradient best matches g,

\[ S \cdot d = D \cdot g, \tag{33} \]

and additively offsetting d so that it has value zero at v_i. Our polygon Laplacian offers a natural decomposition into divergence and gradient (cf. Section 4.5). Crane et al. [CWW13] demonstrate how to obtain a normalized gradient for computing geodesic distances using the operator of Alexa and Wardetzky.

We qualitatively compare the results using our operator, the one from [AW11], and several polygon triangulation strategies in Figure 7. (a) Our construction gives a result with considerably fewer artifacts compared to the other approaches. (b) The Laplace operator of Alexa and Wardetzky fails independent of the choice of λ. (d) Triangulating polygons to maximize the minimal angle also gives good results, but this approach is not suitable for arbitrary meshes since it fails on non-convex polygons. (e) Minimum-area triangulations avoid this problem, but give worse results due to poor triangle shapes. (f) Combining minimum-area triangulations with the Laplacian based on the intrinsic Delaunay triangulation [BS07] fixes this problem, but is more complex to compute. In (c) we show the result obtained by using the cotangent Laplacian on the mesh explicitly refined by inserting the virtual vertices (this is equivalent to S^f). Our construction is clearly different from just refining polygons.

The quality of geodesics is linked to the number of negative off-diagonal coefficients in the stiffness matrix. Analyzing the ratio of negative off-diagonal entries for Figure 7 confirms this correlation:

ours   [AW11]  c)    d)   e)    f)
11%    21%     17%   5%   15%   0%

We consistently observe a significantly smaller number of negative off-diagonal coefficients in our operator as compared to [AW11].
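Putting Equations (31)–(33) together, the pipeline reads roughly as follows (our sketch, not the authors' code; since S has a constant null-space, the final solve is regularized here by a small multiple of M):

import numpy as np
import scipy.sparse.linalg as spla

def geodesics_in_heat(S, M, G, D, i, eps):
    n = M.shape[0]
    e = np.zeros(n); e[i] = 1.0
    u = spla.spsolve((M - eps * S).tocsc(), M @ e)      # Eq. (31): diffuse the impulse
    g = (G @ u).reshape(-1, 3)                          # per-half-edge gradients of u
    g = -g / np.maximum(np.linalg.norm(g, axis=1, keepdims=True), 1e-12)  # Eq. (32)
    d = spla.spsolve((S - 1e-8 * M).tocsc(), D @ g.reshape(-1))  # Eq. (33), regularized
    return d - d[i]                                     # additive offset: d(v_i) = 0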
Table 4: Root-mean-square error of computed geodesic distances for the different planar tessellations shown in Figure 3.

                                     [AW11]
Mesh      Ours    [BS07]   λ = 0.1  λ = 0.5  λ = 1
QUADS 1   0.0265  0.0394   0.0179   0.0415   0.1206
QUADS 2   0.0356  0.0856   0.0403   0.0469   0.1233
L-SHAPED  0.0574  0.0736   0.4216   0.1115   1.6531
TETRIS 1  0.2183  0.1408   0.4521   0.2304   0.2483
TETRIS 2  0.0821  0.0665   0.4381   0.1177   0.1852

Table 5: Timing (in ms) for constructing the Laplace matrix (top) and solving the linear system (bottom). The latter is split into the time needed for Cholesky factorization and for back-substitution.

Mesh     Affine  Convex  Centroid  Abs. Area  [AW11]  [Lie03]
HEXAGON  55      142     13        357        52      21
QUADS    171     573     50        1460       240     82
HEXAGON  472+25  483+25  476+26    469+25     477+26  77+8
QUADS    596+35  596+35  599+35    600+36     599+35  498+29

Moreover, the optimal λ parameter for [AW11] has to be determined manually for each mesh. Figure 8 shows another result of geodesics in heat. This example features highly anisotropic polygons that lead to severe distortions for [AW11], while our operator defines smooth geodesic distances. Table 4 provides a quantitative evaluation of geodesic distances by comparing them to Euclidean distances on different planar meshes (Figure 3). Our operator yields smaller root-mean-square errors for most models, including the geodesic distances computed via intrinsic Delaunay triangulation [BS07, SSC19] (based on the implementation provided in libigl [JP∗18]).

5.6. Timings

We evaluated the computational costs of the different operators on the HEX SPHERE (16070 faces) and the FINE SPHERE (96408 faces) shown in Figure 2. All timings were measured on a standard workstation with a six-core Intel Xeon 3.6 GHz CPU; no experiment exploited multi-threading. We analyze our approach with the different virtual vertex options described in Section 4.2: centroid of polygon vertices, minimizing the sum of (absolute) triangle areas (Abs. Area), and minimizing the sum of squared triangle areas using either affine weights or convex weights. We compare these methods to Alexa and Wardetzky [AW11] and to the minimum-area polygon triangulation [Lie03]. The timings are given in Table 5.

In terms of the construction time for the Laplace matrix, our approach is faster than [AW11] on fine-resolution meshes and comparable for coarser meshes. And, with the exception of centroid, it is also the fastest of the different virtual vertex choices, since the Newton optimization of Abs. Area (based on Eigen) and the QP solver of convex (based on CGAL) are computationally expensive. However, triangulating the mesh and constructing the cotangent Laplacian is faster than defining a polygon Laplacian. Also, since a vertex on the polygon mesh has at least as many face-adjacent neighbors as the same vertex on the triangulated mesh, the polygon Laplacians are less sparse, resulting in an increased solver time.

5.7. Comparison of virtual vertex choices

As stated in Section 4.2, there are several options for computing the virtual vertices and their weights. We compare the performance of these alternative constructions, using both lumped and un-lumped mass matrices, in a number of applications. For geodesic distances (see Section 5.5), our proposed version using affine weights gives the overall best results (see Table 6), although the strictly convex weights yield a smaller error for meshes with non-star-shaped faces.
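The affine-weight variant compared here solves a small per-polygon system, derived in Appendix A. A direct transcription of Equations (37)–(38) might look as follows (our sketch, not the authors' code; np.linalg.lstsq returns the least-norm solution in the underdetermined cases, mirroring the pseudo-inverse used in the paper):

import numpy as np

def affine_virtual_point_weights(X):
    # X: (n, 3) positions of one polygon's vertices, ordered around the polygon
    n = X.shape[0]
    E = np.roll(X, -1, axis=0) - X              # edge vectors e_k = x_{k+1} - x_k
    C = np.stack([np.cross(X[j], E) for j in range(n)])   # C[j, k] = x_j x e_k
    A = 2.0 * np.einsum('jkd,ikd->ij', C, C)    # Eq. (37): A_ij = 2 sum_k (x_j x e_k).(x_i x e_k)
    F = np.cross(E, X)                          # e_k x x_k
    b = 2.0 * np.einsum('ikd,kd->i', C, F)      # Eq. (37): b_i = 2 sum_k (x_i x e_k).(e_k x x_k)
    Asys = np.vstack([A, np.ones((1, n))])      # Eq. (38): append partition-of-unity row
    rhs = np.append(-b, 1.0)
    w, *_ = np.linalg.lstsq(Asys, rhs, rcond=None)  # least-norm solution
    return w

For a planar polygon the system is rank-deficient whenever n > 3, and the least-norm solution spreads the weights as evenly as possible, matching the discussion in Appendix A.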
For most applications, using the lumped mass matrix gives better results, as also shown in the additional experiments in the supplementary material. Balancing numerical performance with efficiency, we found the minimizer of the squared triangle areas through affine weights to be the best choice.

5.8. Convergence behavior

We evaluate the convergence behavior of our Laplacian under mesh refinement by solving the Poisson equation ∆f = b with Dirichlet boundary conditions for the Franke test function [Fra79]

\[ f(x,y) = \tfrac{3}{4}\, e^{-\frac{(9x-2)^2 + (9y-2)^2}{4}} + \tfrac{3}{4}\, e^{-\frac{(9x+1)^2}{49} - \frac{9y+1}{10}} + \tfrac{1}{2}\, e^{-\frac{(9x-7)^2 + (9y-3)^2}{4}} - \tfrac{1}{5}\, e^{-(9x-4)^2 - (9y-7)^2}. \]

Figure 9 depicts the decrease of the L² error under mesh refinement for regular triangle, quad, and hex meshes. The identical slope in this log-log plot reveals that our operator inherits the quadratic convergence order of the triangle-based cotangent Laplacian.

Summary. While our evaluations do not demonstrate that our Laplace operator is superior to existing state-of-the-art methods, they do show that it is competitive. We perform slightly better than triangulation [Lie03]. This is likely because our Laplacian is denser. And, while the polygon Laplacian of [AW11] outperforms ours in some cases, its behavior depends on the choice of the parameter λ, which cannot be fixed so as to perform well in all cases. In contrast, our method is parameter-free.

6. Conclusion

In this work we have proposed a novel polygon Laplacian, defined by first refining the polygon mesh to a triangle mesh and then coarsening the cotangent Laplacian from the triangle mesh back to the original polygon mesh. The derived polygon Laplacian exhibits numerous desirable properties, including sparsity, symmetry, negative semi-definiteness, linear precision, and consistency with the divergence and gradient operators, without suffering from an increase in the dimensionality of the linear system. We have evaluated our Laplacian against other state-of-the-art methods and have demonstrated that it performs competitively, providing efficient and high-quality solutions without requiring parameter tuning.

Figure 8: Geodesic distances computed on a quad mesh (a) with our Laplacian (b) and the Laplacian proposed in [AW11] (c). Even with the manually determined best weight parameter (λ = 0.4), (c) exhibits distortions while ours remains stable despite highly anisotropic faces.

Table 6: RMSE of geodesic distances for the planar meshes in Figure 3, computed with different choices of virtual vertices and using un-lumped and lumped mass matrices.

          un-lumped mass matrix                  lumped mass matrix
Mesh      Affine  Convex  Centroid  Abs.Area    Affine  Convex  Centroid  Abs.Area
QUADS 1   0.025   0.025   0.025     0.025       0.026   0.027   0.027     0.027
QUADS 2   0.031   0.031   0.031     0.031       0.036   0.036   0.036     0.036
L-SHAPED  0.141   0.165   0.134     0.134       0.057   0.063   0.068     0.068
TETRIS 1  0.465   0.490   0.490     0.491       0.218   0.181   0.186     0.185
TETRIS 2  0.406   0.346   0.151     0.153       0.082   0.089   0.089     0.088

Figure 9: L² convergence plots for the solution of the Poisson equation on triangle, quad, and hex meshes of increasing resolution.

In the future, we would like to extend our approach in several ways. We would like to consider introducing multiple virtual vertices within a single face, as this could allow for better triangulations of non-convex polygons.
We would like to consider loosening the restriction that the virtual vertex needs to be an affine combination of the polygon vertices. For example, in the case of a planar polygonization of the sphere, this would allow us to introduce vertices on the surface of the sphere, not just on the planes containing the polygons. Finally, we would like to extend our approach to higher-order basis functions and volumetric polyhedral meshes.

Acknowledgments

We thank the anonymous reviewers for their valuable comments and suggestions, and Marc Alexa and Fabian Prada for helpful discussions on Laplacians and discrete exterior calculus. This work was sponsored in part by NSF Award 1422325.

Appendix A: Finding the optimal virtual point

Given a polygon with n vertices (x_1, ..., x_n), we want to insert a point x = x(w), defined as an affine combination x(w) = Σ^n_{j=1} w_j x_j with Σ_j w_j = 1, such that the sum of squared triangle areas over the resulting triangle fan is minimized. This leads to the following optimization problem in the weights w = (w_1, ..., w_n)^T:

\[ \min_w \; \sum_{i=1}^{n} \operatorname{area}\!\left( x_i,\, x_{i+1},\, \sum_{j=1}^{n} w_j x_j \right)^{\!2} \quad \text{s.t.} \quad \sum_{j=1}^{n} w_j = 1. \tag{34} \]

The objective function can be rewritten as

\[ E(w) = \sum_{i=1}^{n} \frac{1}{2} \left\lVert (x_i - x_{i+1}) \times (x(w) - x_i) \right\rVert^2. \tag{35} \]

Since E is quadratic in x(w) and therefore also quadratic in w, it can be written as

\[ E(w) = \tfrac{1}{2}\, w^\top A\, w + b^\top w + c. \tag{36} \]

Minimizing with respect to w, i.e., setting ∂E/∂w to zero, leads to A w = −b with

\[ A_{ij} = 2 \sum_{k=1}^{n} \left( x_j \times (x_{k+1} - x_k) \right) \cdot \left( x_i \times (x_{k+1} - x_k) \right), \qquad b_i = 2 \sum_{k=1}^{n} \left( x_i \times (x_{k+1} - x_k) \right) \cdot \left( (x_{k+1} - x_k) \times x_k \right). \tag{37} \]

We add one row to the matrix to enforce the partition-of-unity constraint Σ_j w_j = 1, turning it into the (n+1)×n linear system

\[ \begin{pmatrix} A \\ 1 \cdots 1 \end{pmatrix} w = \begin{pmatrix} -b \\ 1 \end{pmatrix}. \tag{38} \]

The matrix A has rank 2 or 3 for planar or non-planar polygons, respectively. Hence, the system in Equation (38) has rank 3 or 4 for planar/non-planar polygons. It is therefore fully determined for either (planar) triangles or non-planar quads, and is underdetermined otherwise. In the latter case, we aim for the least-norm solution, i.e., the solution w with minimal ‖w‖, because it distributes the influence of polygon vertices x_i equally. We handle both the fully-determined and the under-determined cases in a robust and unified manner through the matrix pseudo-inverse [GL89], which we compute through Eigen's complete orthogonal decomposition [GJ∗10].

References

[Abb84] Abbott E. A.: Flatland: A Romance of Many Dimensions. Seeley & Co., 1884.
[AFW06] Arnold D. N., Falk R. S., Winther R.: Finite element exterior calculus, homological techniques, and applications. Acta Numerica 15 (2006), 1–155.
[AW11] Alexa M., Wardetzky M.: Discrete Laplacians on general polygonal meshes. ACM Transactions on Graphics 30, 4 (2011), 102:1–102:10.
[BKP∗10] Botsch M., Kobbelt L., Pauly M., Alliez P., Levy B.: Polygon Mesh Processing. AK Peters, 2010.
[BS07] Bobenko A. I., Springborn B. A.: A discrete Laplace–Beltrami operator for simplicial surfaces. Discrete & Computational Geometry 38, 4 (2007), 740–756.
[BSW08] Belkin M., Sun J., Wang Y.: Discrete Laplace operator on meshed surfaces. In Proceedings of Symposium on Computational Geometry (2008), pp. 278–287.
[CLB∗09] Chuang M., Luo L., Brown B. J., Rusinkiewicz S., Kazhdan M.: Estimating the Laplace-Beltrami operator by restricting 3d functions. Computer Graphics Forum 28, 5 (2009), 1475–1484.
[CRK16] Chuang M., Rusinkiewicz S., Kazhdan M.: Gradient-domain processing of meshes. Journal of Computer Graphics Techniques 5, 4 (2016), 44–55.
[CWW13] Crane K., Weischedel C., Wardetzky M.: Geodesics in heat: A new approach to computing distance based on heat flow. ACM Transactions on Graphics 32, 5 (2013), 152:1–152:11.
[dGDMD16] de Goes F., Desbrun M., Meyer M., DeRose T.: Subdivision exterior calculus for geometry processing. ACM Transactions on Graphics 35, 4 (2016), 133:1–133:11.
[DMA02] Desbrun M., Meyer M., Alliez P.: Intrinsic parameterizations of surface meshes. Computer Graphics Forum 21, 3 (2002), 209–218.
[DMSB99] Desbrun M., Meyer M., Schröder P., Barr A. H.: Implicit fairing of irregular meshes using diffusion and curvature flow. In Proceedings of ACM SIGGRAPH (1999), pp. 317–324.
[Dzi88] Dziuk G.: Finite Elements for the Beltrami operator on arbitrary surfaces. Springer Berlin Heidelberg, 1988, pp. 142–155.
[Fle84] Fletcher C.: Computational Galerkin Methods. Computational Physics Series. Springer-Verlag, 1984.
[Flo97] Floater M. S.: Parametrization and smooth approximation of surface triangulations. Computer Aided Geometric Design 14, 3 (1997), 231–250.
[Fra79] Franke R.: A critical comparison of some methods for interpolation of scattered data. Tech. rep., Naval Postgraduate School, 1979.
[GGT06] Gortler S. J., Gotsman C., Thurston D.: Discrete one-forms on meshes and applications to 3d mesh parameterization. Computer Aided Geometric Design 23, 2 (2006), 83–112.
[GJ∗10] Guennebaud G., Jacob B., et al.: Eigen v3. 2010.
[GL89] Golub G. H., Van Loan C. F.: Matrix Computations. Johns Hopkins University Press, Baltimore, 1989.
[JP∗18] Jacobson A., Panozzo D., et al.: libigl: A simple C++ geometry processing library, 2018.
[KSBC12] Kazhdan M., Solomon J., Ben-Chen M.: Can mean-curvature flow be modified to be non-singular? Computer Graphics Forum 31, 5 (2012), 1745–1754.
[Lie03] Liepa P.: Filling holes in meshes. In Proceedings of Eurographics/ACM SIGGRAPH Symposium on Geometry Processing (2003), pp. 200–205.
[LZ10] Lévy B., Zhang H. R.: Spectral mesh processing. In ACM SIGGRAPH 2010 Courses (2010), pp. 8:1–8:312.
[Mac49] MacNeal R.: The Solution of Partial Differential Equations by Means of Electrical Networks. PhD thesis, California Institute of Technology, 1949.
[MTAD08] Mullen P., Tong Y., Alliez P., Desbrun M.: Spectral conformal parameterization. Computer Graphics Forum 27, 5 (2008), 1487–1494.
[PP93] Pinkall U., Polthier K.: Computing discrete minimal surfaces and their conjugates. Experimental Mathematics 2 (1993), 15–36.
[SB19] Sieger D., Botsch M.: The polygon mesh processing library, 2019.
[SCOL∗04] Sorkine O., Cohen-Or D., Lipman Y., Alexa M., Rössl C., Seidel H.-P.: Laplacian surface editing. In Proceedings of Eurographics Symposium on Geometry Processing (2004), pp. 179–188.
[SSC19] Sharp N., Soliman Y., Crane K.: Navigating intrinsic triangulations. ACM Transactions on Graphics 38, 4 (2019).
[Tau95] Taubin G.: A signal processing approach to fair surface design. In Proceedings of ACM SIGGRAPH (1995), pp. 351–358.
[VL08] Vallet B., Lévy B.: Spectral geometry processing with manifold harmonics. Computer Graphics Forum 27, 2 (2008), 251–260.
[Whi57] Whitney H.: Geometric Integration Theory. Princeton University Press, 1957.
[WMKG07] Wardetzky M., Mathur S., Kälberer F., Grinspun E.: Discrete Laplace operators: No free lunch. In Proceedings of Eurographics Symposium on Geometry Processing (2007), pp. 33–37.
[XLH11] Xiong Y., Li G., Han G.: Mean Laplace-Beltrami operator for quadrilateral meshes. In Transactions on Edutainment V, Pan Z., Cheok A. D., Müller W. (Eds.). Springer Berlin Heidelberg, 2011, pp. 189–201.
[Zha04] Zhang H.: Discrete combinatorial Laplacian operators for digital geometry processing. In Proceedings of SIAM Conference on Geometric Design and Computing (2004), pp. 575–592.
189377
https://proofwiki.org/wiki/Definition:Argand_Diagram
Definition:Argand Diagram

Definition

An Argand diagram is a graphical representation of a set of complex numbers on the complex plane. A complex number can be represented on an Argand diagram as a vector from the origin $O = \tuple {0, 0}$ to the point $P = \tuple {x, y}$ whose coordinates are the complex number $x + i y$.

Examples

Example: $-3 + 4 i$

The following is an Argand diagram of the complex number $-3 + 4 i$:

Also see

Definition:Complex Plane, also known as the Argand plane or Gaussian plane, and so on. $\mathsf{Pr} \infty \mathsf{fWiki}$ makes the distinction between the diagram itself and the complex plane in which it is embedded. Results about Argand diagrams can be found here.

Source of Name

This entry was named for Jean-Robert Argand.

Historical Note

The Argand diagram appears in Jean-Robert Argand's self-published $1806$ work Essai sur une manière de représenter les quantités imaginaires dans les constructions géométriques (Essay on a method of representing imaginary quantities by geometric constructions). This would have passed unnoticed by the mathematical community except that Legendre received a copy. He had no idea who had published it (as Argand had failed to include his name anywhere in it). Legendre passed it to François Français. His brother Jacques Français found it in his papers after his death in $1810$, and published it in $1813$ in the journal Annales de mathématiques pures et appliquées, announcing it as by an unknown mathematician. He appealed for the author of the work to make himself known, which Argand did, submitting a slightly modified version for publication, again in Annales de mathématiques pures et appliquées. By this time, however, Carl Friedrich Gauss had already himself invented the same concept. It must be noted that this concept had in fact been invented by Caspar Wessel as early as $1787$, and been published in the paper Om directionens analytiske betegning by the Danish academy in $1799$. This paper was rediscovered in $1895$ by Sophus Juel, and later that year Sophus Lie republished it. Wessel's precedence is now universally recognised, but the term Argand diagram has stuck.

Sources

1957: E.G. Phillips: Functions of a Complex Variable (8th ed.): Chapter $\text I$: Functions of a Complex Variable: $\S 3$. Geometric Representation of Complex Numbers
1960: Walter Ledermann: Complex Numbers: $\S 2$. Geometrical Representations
1968: Murray R. Spiegel: Mathematical Handbook of Formulas and Tables: $\S 6$: Complex Numbers: Graph of a Complex Number
1981: Murray R. Spiegel: Theory and Problems of Complex Variables (SI ed.): $1$: Complex Numbers: Graphical Representation of Complex Numbers
1989: Ephraim J. Borowski and Jonathan M. Borwein: Dictionary of Mathematics: Argand diagram or Gaussian plane
1998: David Nelson: The Penguin Dictionary of Mathematics (2nd ed.): Argand diagram (complex plane)
1998: David Nelson: The Penguin Dictionary of Mathematics (2nd ed.): complex number
2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.): Argand diagram (complex plane)
2008: David Nelson: The Penguin Dictionary of Mathematics (4th ed.): complex number
2009: Murray R. Spiegel, Seymour Lipschutz and John Liu: Mathematical Handbook of Formulas and Tables (3rd ed.): $\S 4$: Complex Numbers: Complex Plane
2014: Christopher Clapham and James Nicholson: The Concise Oxford Dictionary of Mathematics (5th ed.): Argand diagram
2021: Richard Earl and James Nicholson: The Concise Oxford Dictionary of Mathematics (6th ed.): Argand diagram
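As a worked instance of the definition (our illustration, not part of the ProofWiki page): the example $-3 + 4 i$ above corresponds to the vector from $O$ to $P = \tuple {-3, 4}$, whose length is the modulus

\[ \left\vert -3 + 4 i \right\vert = \sqrt {\paren {-3}^2 + 4^2} = \sqrt {25} = 5 \]

so the representing vector reaches $3$ units to the left of and $4$ units above the origin, placing $P$ in the second quadrant of the Argand diagram.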
189378
http://www.wallace.ccfaculty.org/book/1.7%20Variation%20Practice.pdf
1.7 Practice - Variation

Write the formula that expresses the relationship described

1. c varies directly as a
2. x is jointly proportional to y and z
3. w varies inversely as x
4. r varies directly as the square of s
5. f varies jointly as x and y
6. j is inversely proportional to the cube of m
7. h is directly proportional to b
8. x is jointly proportional with the square of a and the square root of b
9. a is inversely proportional to b

Find the constant of variation and write the formula to express the relationship using that constant

10. a varies directly as b and a = 15 when b = 5
11. p is jointly proportional to q and r and p = 12 when q = 8 and r = 3
12. c varies inversely as d and c = 7 when d = 4
13. t varies directly as the square of u and t = 6 when u = 3
14. e varies jointly as f and g and e = 24 when f = 3 and g = 2
15. w is inversely proportional to the cube of x and w is 54 when x = 3
16. h is directly proportional to j and h = 12 when j = 8
17. a is jointly proportional with the square of x and the square root of y and a = 25 when x = 5 and y = 9
18. m is inversely proportional to n and m = 1.8 when n = 2.1

Solve each of the following variation problems by setting up a formula to express the relationship, finding the constant, and then answering the question.

19. The electrical current, in amperes, in a circuit varies directly as the voltage. When 15 volts are applied, the current is 5 amperes. What is the current when 18 volts are applied?
20. The current in an electrical conductor varies inversely as the resistance of the conductor. If the current is 12 amperes when the resistance is 240 ohms, what is the current when the resistance is 540 ohms?
21. Hooke's law states that the distance that a spring is stretched by a hanging object varies directly as the mass of the object. If the distance is 20 cm when the mass is 3 kg, what is the distance when the mass is 5 kg?
22. The volume of a gas varies inversely as the pressure upon it. The volume of a gas is 200 cm3 under a pressure of 32 kg/cm2. What will be its volume under a pressure of 40 kg/cm2?
23. The number of aluminum cans used each year varies directly as the number of people using the cans. If 250 people use 60,000 cans in one year, how many cans are used each year in Dallas, which has a population of 1,008,000?
24. The time required to do a job varies inversely as the number of people working. It takes 5 hr for 7 bricklayers to build a park wall. How long will it take 10 bricklayers to complete the job?
25. According to Fidelity Investment Vision Magazine, the average weekly allowance of children varies directly as their grade level. In a recent year, the average allowance of a 9th-grade student was 9.66 dollars per week. What was the average weekly allowance of a 4th-grade student?
26. The wavelength of a radio wave varies inversely as its frequency. A wave with a frequency of 1200 kilohertz has a length of 300 meters. What is the length of a wave with a frequency of 800 kilohertz?
27. The number of kilograms of water in a human body varies directly as the mass of the body. A 96-kg person contains 64 kg of water. How many kilograms of water are in a 60-kg person?
28. The time required to drive a fixed distance varies inversely as the speed. It takes 5 hr at a speed of 80 km/h to drive a fixed distance. How long will it take to drive the same distance at a speed of 70 km/h?
29. The weight of an object on Mars varies directly as its weight on Earth. A person who weighs 95 lb on Earth weighs 38 lb on Mars. How much would a 100-lb person weigh on Mars?
30. At a constant temperature, the volume of a gas varies inversely as the pressure. If the pressure of a certain gas is 40 newtons per square meter when the volume is 600 cubic meters, what will the pressure be when the volume is reduced by 240 cubic meters?
31. The time required to empty a tank varies inversely as the rate of pumping. If a pump can empty a tank in 45 min at the rate of 600 kL/min, how long will it take the pump to empty the same tank at the rate of 1000 kL/min?
32. The weight of an object varies inversely as the square of the distance from the center of the earth. At sea level (6400 km from the center of the earth), an astronaut weighs 100 lb. How far above the earth must the astronaut be in order to weigh 64 lb?
33. The stopping distance of a car after the brakes have been applied varies directly as the square of the speed r. If a car traveling 60 mph can stop in 200 ft, how fast can a car go and still stop in 72 ft?
34. The drag force on a boat varies jointly as the wetted surface area and the square of the velocity of the boat. If a boat going 6.5 mph experiences a drag force of 86 N when the wetted surface area is 41.2 ft2, how fast must a boat with 28.5 ft2 of wetted surface area go in order to experience a drag force of 94 N?
35. The intensity of light from a light bulb varies inversely as the square of the distance from the bulb. Suppose the intensity is 90 W/m2 (watts per square meter) when the distance is 5 m. How much further would it be to a point where the intensity is 40 W/m2?
36. The volume of a cone varies jointly as its height and the square of its radius. If a cone with a height of 8 centimeters and a radius of 2 centimeters has a volume of 33.5 cm3, what is the volume of a cone with a height of 6 centimeters and a radius of 4 centimeters?
37. The intensity of a television signal varies inversely as the square of the distance from the transmitter. If the intensity is 25 W/m2 at a distance of 2 km, how far from the transmitter are you when the intensity is 2.56 W/m2?
38. The intensity of illumination falling on a surface from a given source of light is inversely proportional to the square of the distance from the source of light. The unit for measuring the intensity of illumination is usually the foot-candle. If a given source of light gives an illumination of 1 foot-candle at a distance of 10 feet, what would the illumination be from the same source at a distance of 20 feet?

1.7 Answers - Variation

1) c/a = k
2) x/(yz) = k
3) wx = k
4) r/s^2 = k
5) f/(xy) = k
6) jm^3 = k
7) h/b = k
8) x/(a^2 √b) = k
9) ab = k
10) a/b = 3
11) p/(qr) = 0.5
12) cd = 28
13) t/u^2 = 0.67
14) e/(fg) = 4
15) wx^3 = 1458
16) h/j = 1.5
17) a/(x^2 √y) = 0.33
18) mn = 3.78
19) 6 amperes
20) 5.3 amperes
21) 33.3 cm
22) 160 cm3
23) 241,920,000 cans
24) 3.5 hours
25) 4.29 dollars
26) 450 m
27) 40 kg
28) 5.7 hr
29) 40 lb
30) 100 N
31) 27 min
32) 1600 km
33) r = 36
34) 8.2 mph
35) 2.5 m
36) V = 100.5 cm3
37) 6.25 km
38) I = 0.25

Beginning and Intermediate Algebra by Tyler Wallace is licensed under a Creative Commons Attribution 3.0 Unported License.
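The recipe used throughout (set up the formula, find the constant of variation from the given data, then evaluate) is mechanical enough to script. The following small Python check is ours, not part of the worksheet; it reproduces the answers to problems 19 and 20:

# Direct variation y = k*x: find k from one data point, then evaluate.
def direct(x0, y0, x1):
    k = y0 / x0                  # constant of variation
    return k * x1

# Inverse variation y = k/x: here the constant is the product.
def inverse(x0, y0, x1):
    k = y0 * x0
    return k / x1

print(direct(15, 5, 18))                # problem 19: 6.0 amperes
print(round(inverse(240, 12, 540), 1))  # problem 20: 5.3 amperes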
189379
https://math.libretexts.org/Bookshelves/Algebra/College_Algebra_1e_(OpenStax)/05%3A_Polynomial_and_Rational_Functions/509%3A_Modeling_Using_Variation
5.9: Modeling Using Variation

Learning Objectives

In this section, you will:
Solve direct variation problems.
Solve inverse variation problems.
Solve problems involving joint variation.

A used-car company has just offered their best candidate, Nicole, a position in sales. The position offers 16% commission on her sales. Her earnings depend on the amount of her sales. For instance, if she sells a vehicle for $4,600, she will earn $736. She wants to evaluate the offer, but she is not sure how. In this section, we will look at relationships, such as this one, between earnings, sales, and commission rate.

Solving Direct Variation Problems

In the example above, Nicole's earnings can be found by multiplying her sales by her commission. The formula e = 0.16s tells us her earnings, e, come from the product of 0.16, her commission, and the sale price of the vehicle. If we create a table, we observe that as the sales price increases, the earnings increase as well, which should be intuitive. See Table 5.8.1.

| s, sales price | e = 0.16s | Interpretation |
| --- | --- | --- |
| $4,600 | $736 | A sale of a $4,600 vehicle results in $736 earnings. |
| $9,200 | $1,472 | A sale of a $9,200 vehicle results in $1,472 earnings. |
| $18,400 | $2,944 | A sale of an $18,400 vehicle results in $2,944 earnings. |

Table 5.8.1

Notice that earnings are a multiple of sales. As sales increase, earnings increase in a predictable way. Double the sales of the vehicle from $4,600 to $9,200, and we double the earnings from $736 to $1,472. As the input increases, the output increases as a multiple of the input. A relationship in which one quantity is a constant multiplied by another quantity is called direct variation. Each variable in this type of relationship varies directly with the other.

Figure 5.8.1 represents the data for Nicole's potential earnings. We say that earnings vary directly with the sales price of the car. The formula y = kx^n is used for direct variation. The value k is a nonzero constant greater than zero and is called the constant of variation. In this case, k = 0.16 and n = 1. We saw functions like this one when we discussed power functions.

A General Note: DIRECT VARIATION

If x and y are related by an equation of the form y = kx^n, then we say that the relationship is direct variation and y varies directly with, or is proportional to, the nth power of x. In direct variation relationships, there is a nonzero constant ratio k = y/x^n, where k is called the constant of variation, which helps define the relationship between the variables.

Given a description of a direct variation problem, solve for an unknown.
Identify the input, x, and the output, y.
Determine the constant of variation. You may need to divide y by the specified power of x to determine the constant of variation.
Use the constant of variation to write an equation for the relationship.
Substitute known values into the equation to find the unknown.

Example: Solving a Direct Variation Problem

The quantity y varies directly with the cube of x. If y = 25 when x = 2, find y when x is 6.

Solution

The general formula for direct variation with a cube is y = kx^3. The constant can be found by dividing y by the cube of x: k = y/x^3 = 25/8. Now use the constant to write an equation that represents this relationship: y = (25/8)x^3. Substitute x = 6 and solve for y: y = (25/8) · 6^3 = 675.

Analysis

The graph of this equation is a simple cubic, as shown in Figure 5.8.2.

Q&A

Do the graphs of all direct variation equations look like the one in this Example?

No. Direct variation equations are power functions; they may be linear, quadratic, cubic, quartic, radical, etc. But all of the graphs pass through (0, 0).

Exercise

The quantity y varies directly with the square of x. If y = 24 when x = 3, find y when x is 4.

Solution

Solving Inverse Variation Problems

Water temperature in an ocean varies inversely to the water's depth. The formula T = 14,000/d gives us the temperature T in degrees Fahrenheit at a depth d in feet below Earth's surface. Consider the Atlantic Ocean, which covers 22% of Earth's surface. At a certain location, at the depth of 500 feet, the temperature may be 28°F. If we create Table 5.8.2, we observe that, as the depth increases, the water temperature decreases.

| d, depth | T = 14,000/d | Interpretation |
| --- | --- | --- |
| 500 ft | 28°F | At a depth of 500 ft, the water temperature is 28°F. |
| 1,000 ft | 14°F | At a depth of 1,000 ft, the water temperature is 14°F. |
| 2,000 ft | 7°F | At a depth of 2,000 ft, the water temperature is 7°F. |

Table 5.8.2

We notice in the relationship between these variables that, as one quantity increases, the other decreases. The two quantities are said to be inversely proportional and each term varies inversely with the other. Inversely proportional relationships are also called inverse variations. For our example, Figure 5.8.3 depicts the inverse variation. We say the water temperature varies inversely with the depth of the water because, as the depth increases, the temperature decreases. The formula y = k/x for inverse variation in this case uses k = 14,000.

A General Note: INVERSE VARIATION

If x and y are related by an equation of the form y = k/x^n, where k is a nonzero constant, then we say that y varies inversely with the nth power of x. In inversely proportional relationships, or inverse variations, there is a constant multiple k = x^n · y.

Example: Writing a Formula for an Inversely Proportional Relationship

A tourist plans to drive 100 miles. Find a formula for the time the trip will take as a function of the speed the tourist drives.

Solution

Recall that multiplying speed by time gives distance. If we let t represent the drive time in hours, and v represent the velocity (speed or rate) at which the tourist drives, then vt = distance. Because the distance is fixed at 100 miles, vt = 100, so t = 100/v. Because time is a function of velocity, we can write t(v) = 100/v. We can see that the constant of variation is 100 and, although we can write the relationship using the negative exponent, it is more common to see it written as a fraction. We say that time varies inversely with velocity.

Given a description of an indirect variation problem, solve for an unknown.
Identify the input, x, and the output, y.
Determine the constant of variation. You may need to multiply y by the specified power of x to determine the constant of variation.
Use the constant of variation to write an equation for the relationship.
Substitute known values into the equation to find the unknown.

Example: Solving an Inverse Variation Problem

A quantity y varies inversely with the cube of x. If y = 25 when x = 2, find y when x is 6.

Solution

The general formula for inverse variation with a cube is y = k/x^3. The constant can be found by multiplying y by the cube of x: k = x^3 · y = 2^3 · 25 = 200. Now we use the constant to write an equation that represents this relationship: y = k/x^3, k = 200. Substitute x = 6 and solve for y: y = 200/6^3 = 25/27.

Analysis

The graph of this equation is a rational function, as shown in Figure 5.8.4.

Exercise

A quantity y varies inversely with the square of x. If y = 8 when x = 3, find y when x is 4.

Solution

Solving Problems Involving Joint Variation

Many situations are more complicated than a basic direct variation or inverse variation model. One variable often depends on multiple other variables. When a variable is dependent on the product or quotient of two or more variables, this is called joint variation. For example, the cost of busing students for each school trip varies with the number of students attending and the distance from the school. The variable c, cost, varies jointly with the number of students, n, and the distance, d.

A General Note: JOINT VARIATION

Joint variation occurs when a variable varies directly or inversely with multiple variables. For instance, if x varies directly with both y and z, we have x = kyz. If x varies directly with y and inversely with z, we have x = ky/z. Notice that we only use one constant in a joint variation equation.

Example: Solving Problems Involving Joint Variation

A quantity x varies directly with the square of y and inversely with the cube root of z. If x = 6 when y = 2 and z = 8, find x when y = 1 and z = 27.

Solution

Begin by writing an equation to show the relationship between the variables: x = k·y^2 / z^(1/3). Substitute x = 6, y = 2, and z = 8 to find the value of the constant k: 6 = k·2^2 / 8^(1/3) = k·4/2 = 2k, so k = 3. Now we can substitute the value of the constant into the equation for the relationship: x = 3·y^2 / z^(1/3). To find x when y = 1 and z = 27, we will substitute values for y and z into our equation: x = 3·1^2 / 27^(1/3) = 3/3 = 1.

Exercise

A quantity x varies directly with the square of y and inversely with z. If x = 40 when y = 4 and z = 2, find x when y = 10 and z = 25.

Solution

Access these online resources for additional instruction and practice with direct and inverse variation.
Direct Variation
Inverse Variation
Direct and Inverse Variation
Visit this website for additional practice questions from Learningpod.

Key Equations

| Direct variation | y = kx^n, k is a nonzero constant. |
| Inverse variation | y = k/x^n, k is a nonzero constant. |

Key Concepts

A relationship where one quantity is a constant multiplied by another quantity is called direct variation. See Example.
Two variables that are directly proportional to one another will have a constant ratio.
A relationship where one quantity is a constant divided by another quantity is called inverse variation. See Example.
Two variables that are inversely proportional to one another will have a constant multiple. See Example.
In many problems, a variable varies directly or inversely with multiple variables. We call this type of relationship joint variation. See Example.
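The same find-the-constant workflow extends to joint variation. Here is a short Python sketch (ours, with hypothetical numbers that are not from the text) for the busing-cost model c = k·n·d:

# Joint variation: cost varies jointly with student count n and distance d.
def constant_of_variation(c, n, d):
    return c / (n * d)

def trip_cost(k, n, d):
    return k * n * d

# Hypothetical data (not from the section): 30 students, 10 miles, $75 total.
k = constant_of_variation(75.0, 30, 10)   # k = 0.25
print(trip_cost(k, 40, 25))               # predicted cost for 40 students, 25 miles: 250.0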
189380
https://www.scribd.com/document/61948241/Viktor-v-Prasolov-Problems-in-Plane-and-Solid-Geometry-Vol-2-Solid-Geometry-239p
Viktor V. Prasolov - Problems in Plane and Solid Geometry (Vol. 2, Solid Geometry)

CHAPTER 1. LINES AND PLANES IN SPACE

§ Angles and distances between skew lines

1.1. Given cube ABCDA1B1C1D1 with side a, find the angle and the distance between lines A1B and AC1.

1.2. Given a cube with side 1, find the angle and the distance between skew diagonals of two of its neighbouring faces.

1.3. Let K, L and M be the midpoints of edges AD, A1B1 and CC1 of the cube ABCDA1B1C1D1. Prove that triangle KLM is an equilateral one and its center coincides with the center of the cube.

1.4. Given cube ABCDA1B1C1D1 with side 1, let K be the midpoint of edge DD1. Find the angle and the distance between lines CK and A1D.

1.5. Edge CD of tetrahedron ABCD is perpendicular to plane ABC; M is the midpoint of DB, N is the midpoint of AB, and point K divides edge CD in relation CK : KD = 1 : 2. Prove that line CN is equidistant from lines AM and BK.

1.6. Find the distance between two skew medians of the faces of a regular tetrahedron with edge 1. (Investigate all the possible positions of medians.)

§ Angles between lines and planes

1.7. A plane is given by equation ax + by + cz + d = 0. Prove that vector (a, b, c) is perpendicular to this plane.

1.8. Find the cosine of the angle between vectors with coordinates (a1, b1, c1) and (a2, b2, c2).

1.9. In rectangular parallelepiped ABCDA1B1C1D1 the lengths of edges are known: AB = a, AD = b, AA1 = c.
a) Find the angle between planes BB1D and ABC1.
b) Find the angle between planes AB1D1 and A1C1D.
c) Find the angle between line BD1 and plane A1BD.

1.10. The base of a regular triangular prism is triangle ABC with side a. On the lateral edges points A1, B1 and C1 are taken so that the distances from them to the plane of the base are equal to a/2, a and 3a/2, respectively. Find the angle between planes ABC and A1B1C1.

§ Lines forming equal angles with lines and with planes

1.11. Line l constitutes equal angles with two intersecting lines l1 and l2 and is not perpendicular to plane Π that contains these lines. Prove that the projection of l to plane Π also constitutes equal angles with lines l1 and l2.

1.12. Prove that line l forms equal angles with two intersecting lines if and only if it is perpendicular to one of the two bisectors of the angles between these lines.

1.13. Given two skew lines l1 and l2; points O1 and A1 are taken on l1; points O2 and A2 are taken on l2 so that O1O2 is the common perpendicular to lines l1 and l2 and line A1A2 forms equal angles with lines l1 and l2. Prove that O1A1 = O2A2.

1.14. Points A1 and A2 belong to planes Π1 and Π2, respectively, and line l is the intersection line of Π1 and Π2. Prove that line A1A2 forms equal angles with planes Π1 and Π2 if and only if points A1 and A2 are equidistant from line l.

1.15. Prove that the line forming pairwise equal angles with three pairwise intersecting lines that lie in plane Π is perpendicular to Π.

1.16. Given three lines non-parallel to one plane, prove that there exists a line forming equal angles with them; moreover, through any point one can draw exactly four such lines.

§ Skew lines

1.17. Given two skew lines, prove that there exists a unique segment perpendicular to them and with the endpoints on these lines.

1.18. In space, there are given two skew lines l1 and l2 and point O not on any of them. Does there always exist a line passing through O and intersecting both given lines? Can there be two such lines?

1.19. In space, there are given three pairwise skew lines. Prove that there exists a unique parallelepiped three edges of which lie on these lines.

1.20. On the common perpendicular to skew lines p and q, a point, A, is taken. Along line p point M is moving and N is the projection of M to q. Prove that all the planes AMN have a common line.

§ Pythagoras's theorem in space

1.21. Line l constitutes angles α, β and γ with three pairwise perpendicular lines. Prove that

cos²α + cos²β + cos²γ = 1.

1.22. Plane angles at the vertex D of tetrahedron ABCD are right ones. Prove that the sum of squares of areas of the three rectangular faces of the tetrahedron is equal to the square of the area of face ABC.

1.23. Inside a ball of radius R, consider point A at distance a from the center of the ball. Through A three pairwise perpendicular chords are drawn.
a) Find the sum of squares of lengths of these chords.
b) Find the sum of squares of lengths of segments of chords into which point A divides them.

1.24. Prove that the sum of squared lengths of the projections of the cube's edges to any plane is equal to 8a², where a is the length of the cube's edge.

1.25. Consider a regular tetrahedron. Prove that the sum of squared lengths of the projections of the tetrahedron's edges to any plane is equal to 4a², where a is the length of an edge of the tetrahedron.

1.26. Given a regular tetrahedron with edge a, prove that the sum of squared lengths of the projections (to any plane) of segments connecting the center of the tetrahedron with its vertices is equal to a².

§ The coordinate method

1.27. Prove that the distance from the point with coordinates (x0, y0, z0) to the plane given by equation ax + by + cz + d = 0 is equal to

\[ \frac{|a x_0 + b y_0 + c z_0 + d|}{\sqrt{a^2 + b^2 + c^2}}. \]

1.28. Given two points A and B and a positive number k ≠ 1, find the locus of points M such that AM : BM = k.

1.29. Find the locus of points X such that

pAX² + qBX² + rCX² = d,

where A, B and C are given points, and p, q, r and d are given numbers such that p + q + r = 0.

1.30. Given two cones with equal angles between the axis and the generator, let their axes be parallel. Prove that all the intersection points of the surfaces of these cones lie in one plane.

1.31. Given cube ABCDA1B1C1D1 with edge a, prove that the distance from any point in space to one of the lines AA1, B1C1, CD is not shorter than a/√2.

1.32. On three mutually perpendicular lines that intersect at point O, points A, B and C equidistant from O are fixed. Let l be an arbitrary line passing through O. Let points A1, B1 and C1 be symmetric through l to A, B and C, respectively. The planes passing through points A1, B1 and C1 perpendicularly to lines OA, OB and OC, respectively, intersect at point M. Find the locus of points M.

Problems for independent study

1.33. Parallel lines l1 and l2 lie in two planes that intersect along line l. Prove that l1 ∥ l.

1.34. Given three pairwise skew lines, prove that there exist infinitely many lines each of which intersects all the three of these lines.

1.35. Triangles ABC and A1B1C1 do not lie in one plane, and lines AB and A1B1, AC and A1C1, BC and B1C1 pairwise intersect.
a) Prove that the intersection points of the indicated lines lie on one line.
b) Prove that lines AA1, BB1 and CC1 either intersect at one point or are parallel.

1.36. Given several lines in space so that any two of them intersect, prove that either all of them lie in one plane or all of them pass through one point.

1.37. In rectangular parallelepiped ABCDA1B1C1D1 diagonal AC1 is perpendicular to plane A1BD. Prove that this parallelepiped is a cube.

1.38. For which dispositions of a dihedral angle and a plane that intersects it do we get as a section an angle that is intersected along its bisector by the bisector plane of the dihedral angle?

1.39. Prove that the sum of angles that a line constitutes with two perpendicular planes does not exceed 90°.

1.40. In a regular quadrangular pyramid the angle between a lateral edge and the plane of its base is equal to the angle between a lateral edge and the plane of a lateral face that does not contain this edge. Find this angle.

1.41. Through edge AA1 of cube ABCDA1B1C1D1 a plane that forms equal angles with lines BC and B1D is drawn. Find these angles.

Solutions

1.1. It is easy to verify that triangle A1BD is an equilateral one. Moreover, point A is equidistant from its vertices. Therefore, its projection is the center of the triangle. Similarly, the projection maps point C1 into the center of triangle A1BD. Therefore, lines A1B and AC1 are perpendicular and the distance between them is equal to the distance from the center of triangle A1BD to its side. Since all the sides of this triangle are equal to a√2, the distance in question is equal to a/√6.

1.2. Let us consider diagonals AB1 and BD of cube ABCDA1B1C1D1. Since B1D1 ∥ BD, the angle between diagonals AB1 and BD is equal to ∠AB1D1. But triangle AB1D1 is an equilateral one and, therefore, ∠AB1D1 = 60°.

It is easy to verify that line BD is perpendicular to plane ACA1C1; therefore, the projection to the plane maps BD into the midpoint M of segment AC. Similarly, point B1 is mapped under this projection into the midpoint N of segment A1C1. Therefore, the distance between lines AB1 and BD is equal to the distance from point M to line AN.

If the legs of a right triangle are equal to a and b and its hypotenuse is equal to c, then the distance from the vertex of the right angle to the hypotenuse is equal to ab/c. In right triangle AMN the legs are equal to 1 and 1/√2; therefore, its hypotenuse is equal to √(3/2) and the distance in question is equal to 1/√3.

1.3. Let O be the center of the cube. Then 2{OK} = {C1D}, 2{OL} = {DA1} and 2{OM} = {A1C1}. Since triangle C1DA1 is an equilateral one, triangle KLM is also an equilateral one and O is its center.

1.4. First, let us calculate the value of the angle. Let M be the midpoint of edge BB1. Then A1M ∥ KC and, therefore, the angle between lines CK and A1D is equal to angle ∠MA1D. This angle can be computed with the help of the law of cosines, because A1D = √2, A1M = √5/2 and DM = 3/2. After simple calculations we get cos ∠MA1D = 1/√10.

To compute the distance between lines CK and A1D, let us take their projections to the plane passing through edges AB and C1D1. This projection sends line A1D into the midpoint O of segment AD1, and points C and K into the midpoint Q of segment BC1 and the midpoint P of segment OD1, respectively. The distance between lines CK and A1D is equal to the distance from point O to line PQ. Legs OP and OQ of right triangle OPQ are equal to 1/√8 and 1, respectively. Therefore, the hypotenuse of this triangle is equal to 3/√8. The required distance is equal to the product of the legs' lengths divided by the length of the hypotenuse, i.e., it is equal to 1/3.

1.5. Consider the projection to the plane perpendicular to line CN. Denote by X1 the projection of any point X. The distance from line CN to line AM (resp. BK) is equal to the distance from point C1 to line A1M1 (resp. B1K1). Clearly, triangle A1D1B1 is an equilateral one, K1 is the intersection point of its medians,
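As a supplementary coordinate check of Problem 1.1 (our addition, not Prasolov's text), place A = (0,0,0), B = (a,0,0), C = (a,a,0), D = (0,a,0) with A1, B1, C1, D1 directly above them at height a. Then

\[
\vec{A_1B} = (a, 0, -a), \qquad \vec{AC_1} = (a, a, a), \qquad \vec{A_1B} \cdot \vec{AC_1} = a^2 + 0 - a^2 = 0,
\]

so the lines are indeed perpendicular, and with \( \vec{A_1B} \times \vec{AC_1} = (a^2, -2a^2, a^2) \) the distance between the lines is

\[
d = \frac{\bigl| \vec{A_1A} \cdot (\vec{A_1B} \times \vec{AC_1}) \bigr|}{\bigl\lVert \vec{A_1B} \times \vec{AC_1} \bigr\rVert} = \frac{a^3}{a^2 \sqrt{6}} = \frac{a}{\sqrt{6}},
\]

in agreement with Solution 1.1.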
189381
https://www.britannica.com/science/aromatic-compound
aromatic compound, chemical compound. Written by Francis A. Carey, Associate Professor of Chemistry, University of Virginia, Charlottesville; fact-checked by The Editors of Encyclopaedia Britannica.
aromatic compound, any of a large class of unsaturated chemical compounds characterized by one or more planar rings of atoms joined by covalent bonds of two different kinds. The unique stability of these compounds is referred to as aromaticity. Although the term aromatic originally concerned odour, today its use in chemistry is restricted to compounds that have particular electronic, structural, or chemical properties. Aromaticity results from particular bonding arrangements that cause certain π (pi) electrons within a molecule to be strongly held. Aromaticity is often reflected in smaller than expected heats of combustion and hydrogenation and is associated with low reactivity.
Benzene (C6H6) is the best-known aromatic compound and the parent to which numerous other aromatic compounds are related. The six carbons of benzene are joined in a ring, having the planar geometry of a regular hexagon in which all of the C—C bond distances are equal. The six π electrons circulate in a region above and below the plane of the ring, each electron being shared by all six carbons, which maximizes the force of attraction between the nuclei (positive) and the electrons (negative). Equally important is the number of π electrons, which, according to molecular orbital theory, must be equal to 4n + 2, in which n = 1, 2, 3, etc. For benzene, with six π electrons, n = 1.
The largest group of aromatic compounds are those in which one or more of the hydrogens of benzene are replaced by some other atom or group, as in toluene (C6H5CH3) and benzoic acid (C6H5CO2H). Polycyclic aromatic compounds are assemblies of benzene rings that share a common side—for example, naphthalene (C10H8). Heterocyclic aromatic compounds contain at least one atom other than carbon within the ring. Examples include pyridine (C5H5N), in which one nitrogen (N) replaces one CH group, and purine (C5H4N4), in which two nitrogens replace two CH groups. Heterocyclic aromatic compounds, such as furan (C4H4O), thiophene (C4H4S), and pyrrole (C4H4NH), contain five-membered rings in which oxygen (O), sulfur (S), and NH, respectively, replace an HC=CH unit. (Francis A. Carey)
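The 4n + 2 electron count quoted above (Hückel's rule) is easy to mechanize; here is a minimal sketch in Python. Note that the article states n = 1, 2, 3, etc., while the rule is commonly quoted with n starting at 0; the sketch uses n >= 0, and treating a fused polycyclic system such as naphthalene by a simple total π-electron count is only a rough heuristic:

```python
def satisfies_huckel(pi_electrons: int) -> bool:
    """True when pi_electrons equals 4n + 2 for some integer n >= 0."""
    return pi_electrons >= 2 and (pi_electrons - 2) % 4 == 0

# Benzene's 6 pi electrons give n = 1; cyclobutadiene's 4 fail the rule.
for name, count in [("benzene", 6), ("naphthalene", 10), ("cyclobutadiene", 4)]:
    print(name, satisfies_huckel(count))   # True, True, False
```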
benzene, chemical compound. Written by Francis A. Carey; fact-checked by The Editors of Encyclopaedia Britannica.
benzene (C6H6), simplest organic, aromatic hydrocarbon and parent compound of numerous important aromatic compounds. Benzene is a colourless liquid with a characteristic odour and is primarily used in the production of polystyrene. It is highly toxic and is a known carcinogen; exposure to it may cause leukemia. As a result, there are strict controls on benzene emissions.
Discovery of benzene
Benzene was first discovered by the English scientist Michael Faraday in 1825 in illuminating gas. In 1834 German chemist Eilhardt Mitscherlich heated benzoic acid with lime and produced benzene. In 1845 German chemist A.W. von Hofmann isolated benzene from coal tar. The structure of benzene has been of interest since its discovery. German chemists Joseph Loschmidt (in 1861) and August Kekule von Stradonitz (in 1866) independently proposed a cyclic arrangement of six carbons with alternating single and double bonds. Kekule subsequently modified his structural formula to one in which oscillation of the double bonds gave two equivalent structures in rapid equilibrium. In 1931 American chemist Linus Pauling suggested that benzene had a single structure, which was a resonance hybrid of the two Kekule structures.
Characteristics of benzene
Modern bonding models (valence-bond and molecular orbital theories) explain the structure and stability of benzene in terms of delocalization of six of its electrons, where delocalization in this case refers to the attraction of an electron by all six carbons of the ring instead of just one or two of them. This delocalization causes the electrons to be more strongly held, making benzene more stable and less reactive than expected for an unsaturated hydrocarbon. As a result, the hydrogenation of benzene occurs somewhat more slowly than the hydrogenation of alkenes (other organic compounds that contain carbon-carbon double bonds), and benzene is much more difficult to oxidize than alkenes. Most of the reactions of benzene belong to a class called electrophilic aromatic substitution that leave the ring itself intact but replace one of the attached hydrogens. These reactions are versatile and widely used to prepare derivatives of benzene.
Experimental studies, especially those employing X-ray diffraction, show benzene to have a planar structure with each carbon-carbon bond distance equal to 1.40 angstroms (Å). This value is exactly halfway between the C=C distance (1.34 Å) and C—C distance (1.46 Å) of a C=C—C=C unit, suggesting a bond type midway between a double bond and a single bond (all bond angles are 120°). Benzene has a boiling point of 80.1 °C (176.2 °F) and a melting point of 5.5 °C (41.9 °F), and it is freely soluble in organic solvents, but only slightly soluble in water.
Uses of benzene
At one time, benzene was obtained almost entirely from coal tar; however, since about 1950, these methods have been replaced by petroleum-based processes. More than half of the benzene produced each year is converted to ethylbenzene, then to styrene, and then to polystyrene. The next largest use of benzene is in the preparation of phenol. Other uses include the preparation of aniline (for dyes) and dodecylbenzene (for detergents). (Francis A. Carey)
189382
https://www.wikihow.com/Find-the-Perpendicular-Bisector-of-Two-Points
How to Find the Perpendicular Bisector of Two Points
Reviewed by Grace Imson, MA. Last updated: March 10, 2025. Fact checked. This article has been viewed 1,124,242 times.
Grace Imson is a math teacher with over 40 years of teaching experience. Grace is currently a math instructor at the City College of San Francisco and was previously in the Math Department at Saint Louis University. She has taught math at the elementary, middle, high school, and college levels. She has an MA in Education, specializing in Administration and Supervision, from Saint Louis University.
A perpendicular bisector is a line that cuts a line segment connecting two points exactly in half at a 90 degree angle. To find the perpendicular bisector of two points, all you need to do is find the midpoint of the segment between them and the negative reciprocal of its slope, and plug these answers into the equation for a line in slope-intercept form. If you want to know how to find the perpendicular bisector of two points, just follow these steps.
Method 1 of 2: Gathering Information
1. Find the midpoint of the two points. To find the midpoint of two points, simply plug them into the midpoint formula: [(x1 + x2)/2, (y1 + y2)/2]. This means that you're just finding the average of the x coordinates and of the y coordinates of the two points, which leads you to the midpoint. Let's say we're working with the (x1, y1) coordinates of (2, 5) and the (x2, y2) coordinates of (8, 3). Here's how you find the midpoint of those two points: [(2 + 8)/2, (5 + 3)/2] = (10/2, 8/2) = (5, 4). The coordinates of the midpoint of (2, 5) and (8, 3) are (5, 4).
2. Find the slope between the two points. To find the slope, simply plug the points into the slope formula: (y2 - y1) / (x2 - x1). The slope of a line measures its vertical change over its horizontal change. Here's how to find the slope of the line that goes through the points (2, 5) and (8, 3): (3 - 5)/(8 - 2) = -2/6 = -1/3. The slope of the line is -1/3. To get this result, you have to reduce -2/6 to its lowest terms, -1/3, since both 2 and 6 are evenly divisible by 2.
3. Find the negative reciprocal of the slope. To find the negative reciprocal of a slope, take the reciprocal of the slope (flip the numerator and the denominator) and change the sign. The reciprocal of 1/2 is 2/1, so its negative reciprocal is -2/1, or just -2; the negative reciprocal of -4 is 1/4. The negative reciprocal of -1/3 is 3, because 3/1 is the reciprocal of 1/3 and the sign has been changed from negative to positive.
Method 2 of 2: Calculating the Equation of the Line
1. Write the equation of a line in slope-intercept form.
The equation of a line in slope-intercept form is y = mx + b, where "x" and "y" represent any x and y coordinates on the line, "m" represents the slope of the line, and "b" represents the y-intercept of the line. The y-intercept is where the line intersects the y-axis. Once you write down this equation, you can begin to find the equation of the perpendicular bisector of the two points.
2. Plug the negative reciprocal of the original slope into the equation. The negative reciprocal of the slope of the points (2, 5) and (8, 3) was 3. The "m" in the equation represents the slope, so plug the 3 into the "m" in the equation y = mx + b: y = 3x + b.
3. Plug the coordinates of the midpoint into the line. You already know that the midpoint of the points (2, 5) and (8, 3) is (5, 4). Since the perpendicular bisector runs through the midpoint of the segment joining the two points, you can plug the coordinates of the midpoint into the equation of the line. Simply plug (5, 4) into the x and y coordinates: 4 = 3(5) + b, so 4 = 15 + b.
4. Solve for the intercept. You have found three of the four variables in the equation of the line. Now you have enough information to solve for the remaining variable, "b," which is the y-intercept of this line. Simply isolate the variable "b" by subtracting 15 from both sides of the equation: 4 = 15 + b gives b = -11.
5. Write the equation of the perpendicular bisector. To write the equation of the perpendicular bisector, you simply have to plug the slope of the line (3) and the y-intercept (-11) into the equation of a line in slope-intercept form. You should not plug any terms into the x and y coordinates, because this equation will let you find any coordinate on the line by plugging in either any x or any y coordinate: y = 3x - 11. The equation for the perpendicular bisector of the points (2, 5) and (8, 3) is y = 3x - 11.
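The two methods above condense into a short function. The following is a minimal sketch in Python (the helper name is ours, not wikiHow's); it uses exact fractions and assumes the segment is neither horizontal nor vertical, so that both the slope and its negative reciprocal exist:

```python
from fractions import Fraction

def perpendicular_bisector(p1, p2):
    """Return (m, b) so the bisector of segment p1p2 is the line y = m*x + b."""
    (x1, y1), (x2, y2) = p1, p2
    mid_x, mid_y = Fraction(x1 + x2, 2), Fraction(y1 + y2, 2)  # Method 1, step 1
    slope = Fraction(y2 - y1, x2 - x1)                         # Method 1, step 2
    m = -1 / slope                                             # Method 1, step 3
    b = mid_y - m * mid_x                                      # Method 2, steps 2-4
    return m, b

# Reproduces the worked example: Fraction(3, 1), Fraction(-11, 1), i.e. y = 3x - 11.
print(perpendicular_bisector((2, 5), (8, 3)))
```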
Community Q&A
Question: What if one of the numbers is negative, and when I try to find the midpoint it gives me 0? The points are A(-4,4) and B(4,8).
Answer (Donagan, Top Answerer): Zero is correct. The x-value midway between -4 and +4 is 0. The y-value midway between 4 and 8 is 6. So the midpoint is (0, 6).
Question: What is the exact length of a line joining the points (-12,3) and (8,4)?
Answer (Donagan, Top Answerer): Visualize a right triangle, the hypotenuse of which is the line joining the two points. One leg of the triangle has this length: 8 - (-12) = 8 + 12 = 20. The length of the other leg is 4 - 3 = 1. The hypotenuse has this length: √(20² + 1²) = √401. Thus, the distance between the points is about 20.025.
Question: Is a bisector the same as the negative reciprocal of the line?
Answer (Donagan, Top Answerer): No. A line with the negative-reciprocal slope is perpendicular to the original line but does not necessarily bisect it.
Article Summary
To find the perpendicular bisector of 2 points, find the midpoint of the 2 points by using the midpoint formula. Then, find the slope of the 2 points by using the slope formula, and find the negative reciprocal of the slope by taking the reciprocal and changing the sign. Write the equation of the line in slope-intercept form using the negative reciprocal and the midpoint. Solve the equation for the intercept to find the perpendicular bisector.
189383
https://www.facebook.com/photo.php?fbid=901043338728159&id=100064675034728&set=a.668121308687031
Facebook: "You're Temporarily Blocked. It looks like you were misusing this feature by going too fast. You've been temporarily blocked from using it."
189384
http://ha.cma.gov.cn/qxkp/kpbd/201809/t20180925_38038.html
Henan Provincial Meteorological Bureau (河南省气象局), Meteorological Science Popularization
Teaching You to Read the Weather Forecast (教你看懂天气预报)
Published: 2018-09-25
How much do you know about the meteorology that is so closely tied to daily life? You have watched the weather forecast for years, but can you correctly identify meteorological-geographic regions such as Huanghuai, Jianghuai, and Jiangnan? And what is the difference between "sunny turning cloudy" and "sunny to cloudy"?
What is the difference between weather and climate?
Two short exchanges make the distinction clear:
"The weather is lovely today: not cold, not hot, with a light breeze. Perfect for an outing." This is weather, meaning short-term atmospheric phenomena.
"Where is your hometown?" "Kunming, Yunnan: no scorching summer, no severe winter, spring all year round." This is climate, an average state that mainly reflects a region's basic characteristics of warmth, cold, dryness, and humidity.
Do you know what time "the early hours" means in meteorology?
To read a forecast correctly, first learn the hours that each time-of-day term covers:
Daytime: 08:00-20:00;
Early hours (pre-dawn): 03:00-05:00;
Early morning: 05:00-08:00;
Morning: 08:00-11:00;
Midday: 11:00-13:00;
Afternoon: 13:00-17:00;
Evening: 17:00-20:00;
Night: 20:00 to 08:00 the next day;
First half of the night: 20:00-24:00;
Second half of the night: 00:00-05:00 the next day.
You hear about temperature every day, but do you know how the terms are used?
Today's maximum temperature: the highest air temperature during the daytime. Because of solar radiation, it usually occurs around 14:00.
Tomorrow morning's minimum temperature: the lowest temperature of the next morning, generally around 06:00.
Tomorrow's minimum temperature: under the influence of cold air and similar factors, the daily minimum sometimes occurs not in the morning but during the daytime, so forecasts occasionally use this wording instead.
"Sunny turning cloudy," "sunny to cloudy," "sunny with occasional clouds": what is the difference?
Sky-condition terms are generally divided into sunny, few clouds, partly cloudy, and overcast.
"Sunny turning cloudy" stresses the transition: the weather may be changing for the worse, for example sunny during the first half of the day and cloudy afterwards, a sign that conditions are about to change.
"Sunny to cloudy" indicates relatively stable conditions that stay within the range from sunny to partly cloudy throughout.
"Sunny with occasional clouds" means that it is sunny most of the time, with clouds increasing for only a small part of the time.
How are "snowstorm" (暴雪) and "heavy snowstorm" (大暴雪) distinguished? How strong is "short-duration heavy rainfall"?
As noted earlier, rain and snow are graded by the total amount that falls within 12 or 24 hours. In reality, precipitation rarely falls without interruption, so would grading it by 12- or 24-hour totals not be unscientific?
To address this, meteorological services introduced the concepts of "showery precipitation" and "short-duration heavy rainfall." Showery precipitation is brief (falling in bursts), with a sudden onset and end, and is divided into rain showers and snow showers; showery precipitation is generally not graded. When showery precipitation is intense, with more than 20 mm of rain in one hour, it is called short-duration heavy rainfall, which belongs to the category of severe convective weather.
Why is the forecast sometimes inaccurate?
Because it is constrained by three main factors: (1) insufficient observational data: some areas, such as oceans and plateaus, have very few observation sites owing to terrain; (2) numerical computing capacity: forecasts worldwide are generally produced by building systems of equations for atmospheric motion and "solving" them, so computing power affects accuracy; (3) instrument precision.
Why do TV weather presenters speak so quickly?
A presenter's normal pace is about 250 characters per minute, and the slot given to each programme is fixed. Take the forecast broadcast after Xinwen Lianbo: the presenter must cover the national three-day outlook within 70 seconds, and when there is unusual weather the information load increases, so the pace can reach 400 to 500 characters per minute.
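The time-of-day table and the 20 mm per hour threshold above translate directly into a small lookup. This is a minimal sketch in Python (the function names are ours, not the bureau's); the article's pre-dawn (03:00-05:00) and second-half-of-night (00:00-05:00) ranges overlap, and the sketch resolves hours 3-5 in favour of pre-dawn:

```python
def forecast_period(hour: int) -> str:
    """Map an hour of day (0-23) to the forecast time terms defined above."""
    periods = [
        (3, 5, "pre-dawn"), (5, 8, "early morning"), (8, 11, "morning"),
        (11, 13, "midday"), (13, 17, "afternoon"), (17, 20, "evening"),
        (20, 24, "first half of the night"), (0, 3, "second half of the night"),
    ]
    for start, end, name in periods:
        if start <= hour < end:
            return name
    raise ValueError("hour must be in 0-23")

def is_short_duration_heavy_rain(mm_per_hour: float) -> bool:
    """More than 20 mm of rain in one hour counts as short-duration heavy rainfall."""
    return mm_per_hour > 20

print(forecast_period(4))                  # pre-dawn
print(is_short_duration_heavy_rain(25.0))  # True
```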
189385
https://biblehub.com/topical/i/immutable.htm
Bible > Topical > Immutable
Topical Encyclopedia
The term "immutable" refers to the unchanging and unchangeable nature of God. In Christian theology, immutability is a key attribute of God, signifying that He is constant and unalterable in His essence, character, will, and promises. This concept is foundational to understanding God's reliability and faithfulness, as it assures believers that God's nature and His promises remain steadfast throughout time.
Biblical Foundation
The immutability of God is explicitly stated in several passages of Scripture. In Malachi 3:6, God declares, "For I, the LORD, do not change; therefore you, O sons of Jacob, are not consumed." This verse underscores the idea that God's unchanging nature is a source of stability and assurance for His people. Similarly, in the New Testament, James 1:17 affirms, "Every good and perfect gift is from above, coming down from the Father of the heavenly lights, with whom there is no change or shifting shadow." This passage highlights that God's goodness and generosity are consistent, unaffected by the fluctuations that characterize the created order. Hebrews 13:8 further emphasizes this attribute by stating, "Jesus Christ is the same yesterday and today and forever." This verse not only affirms the immutability of Christ but also connects it to the divine nature shared with the Father, reinforcing the consistency of God's character across both Testaments.
Theological Implications
The immutability of God has profound theological implications. It assures believers that God's promises are reliable and His purposes are unalterable. Numbers 23:19 declares, "God is not a man, that He should lie, nor a son of man, that He should change His mind. Does He speak and not act? Does He promise and not fulfill?" This assurance is crucial for faith, as it means that God's covenantal commitments, such as those made to Abraham, David, and through Christ, are eternally secure. Moreover, God's immutability is closely linked to His omniscience and omnipotence. Because God is all-knowing, He does not need to change His mind or plans in response to unforeseen circumstances. His omnipotence ensures that He has the power to accomplish His unchanging will.
Philosophical Considerations
Philosophically, the immutability of God distinguishes Him from the mutable nature of creation. While the universe and everything within it are subject to change, decay, and transformation, God remains constant. This distinction underscores the Creator-creature divide and highlights God's transcendence.
Pastoral Significance
For believers, the immutability of God provides comfort and hope. In a world marked by change and uncertainty, the unchanging nature of God offers a firm foundation. Believers can trust that God's love, mercy, and justice remain constant, providing assurance in times of trial and change.
In summary, the immutability of God is a central tenet of Christian theology, affirming that God is unchanging in His essence, character, and promises. This attribute assures believers of the reliability and faithfulness of God, providing a foundation for trust and hope in His eternal purposes.
Webster's Revised Unabridged Dictionary
(a.) Not mutable; unchangeable; unalterable.
International Standard Bible Encyclopedia
IMMUTABILITY; IMMUTABLE. i-mu-ta-bil'-i-ti, i-mu'-ta-b'-l (ametathetos): Occurs in Hebrews 6:17, 18 of the unchangeableness of the Divine counsel. It is the perfection of Yahweh that He changes not in character, will, purpose, aim (Malachi 3:6; so of Christ, Hebrews 13:8). See FAITHFULNESS; UNCHANGEABLE.
Greek
276. ametathetos -- immutable, unchangeable. Part of Speech: Adjective. Transliteration: ametathetos. Phonetic Spelling: (am-et-ath'-et-os). Short Definition: unchanged, unchangeable. //strongsnumbers.com/greek2/276.htm
1982. episkiazo -- to overshadow. "... used in the NT of -- which always brings His -plan to pass (see 1012, 'God's immutable will for ...')." //strongsnumbers.com/greek2/1982.htm
1012. boule -- counsel. "... This level of God's (1012) demonstrates He is ... of history, i.e. always in charge! [1012 is more than God's immutable plan of physical circumstances. ...]" //strongsnumbers.com/greek2/1012.htm
4819. sumbaino -- to come together, i.e. (of events) to come to pass. "... should" (G. Archer). [That is, on the "-level of the will of God. See 1012 ('the Lord's immutable plan for ...').] //strongsnumbers.com/greek2/4819.htm
Library
Dialogue I. -- The Immutable. (Theodoret, The Ecclesiastical History, Book V: Orthodoxos and Eranistes.)
Whether Truth is Immutable? (Aquinas, Summa Theologica, "Of Truth," Eight Articles.)
A Willing People and an Immutable Leader. (A sermon, No. 74, delivered on Sabbath morning, April 13, 1856, by the Rev. C. H. Spurgeon.)
Whether God is Altogether Immutable? (Aquinas, Summa Theologica, "The Immutability of God," Two Articles.)
Whether to be Immutable Belongs to God Alone? (Aquinas, Summa Theologica, "The Immutability of God.")
God's Law Immutable. (White, The Great Controversy, Chapter 25: "The temple of God was opened in heaven, and there was seen in His temple the ark of His testament." Revelation 11:19.)
Demonstrations by Syllogisms. That God the Word is Immutable. ("We have confessed one substance of the Father, of the Son, and of the Holy Ghost, and have agreed that it is immutable.")
Love, Goodness, and Communication of Good, is the Immutable Glory and Perfection of the Divine. (Address 191.)
He Then Declares that the Close Relation Between Names and Things is Immutable. (Gregory of Nyssa, Dogmatic Treatises, Section 2.)
Thesaurus
Immutable (1 Occurrence): (a.) Not mutable; unchangeable; unalterable. /i/immutable.htm
Immutability (1 Occurrence): (n.) The state or quality of being immutable; immutableness. /i/immutability.htm
Unchangeableness (1 Occurrence): /u/unchangeableness.htm
Unchangeable (4 Occurrences): /u/unchangeable.htm
Imna (1 Occurrence): /i/imna.htm
Fate (138 Occurrences): (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm
Fled (181 Occurrences): Hebrews 6:18 "that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ..." /f/fled.htm
Eternal (166 Occurrences): (a.) Existing at all times without change; immutable. /e/eternal.htm
Predestination: They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. /p/predestination.htm
Brass (168 Occurrences): The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. /b/brass.htm
Resources
What is the immutability of God? | GotQuestions.org
What is finite godism? | GotQuestions.org
Does God have emotions? | GotQuestions.org
Immutable: Dictionary and Thesaurus | Clyx.com
Concordance
Immutable (1 Occurrence). Hebrews 6:18 "that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold of the hope set before us." (WEB KJV ASV WBS YLT)
Subtopics
Immutable; Immutability of Christ
Related Terms
Zophah (2 Occurrences); Jimnites (1 Occurrence); Imnah (4 Occurrences); Immutable (1 Occurrence); Amal (1 Occurrence); Shelesh (1 Occurrence); Helem (2 Occurrences); Hotham (3 Occurrences); Imna
/.../spurgeon/spurgeons sermons volume 2 1856/a willing people and an.htm Whether God is Altogether Immutable? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether God is altogether immutable? Objection 1: It seems that God is not altogether immutable. ... /.../aquinas/summa theologica/whether god is altogether immutable.htm Whether to be Immutable Belongs to God Alone? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether to be immutable belongs to God alone? Objection 1: It seems that to be immutable does not belong to God alone. ... /.../aquinas/summa theologica/whether to be immutable belongs.htm God's Law Immutable. ... 25. GOD'S LAW IMMUTABLE. [Illustration: Chapter header.] "The temple of God was opened in heaven, and there was seen in His temple ... /.../white/the great controversy between christ and satan /25 gods law immutable.htm God's Law Immutable ... Chapter 25 God's Law Immutable. The temple of God was opened in heaven, and there was seen in His temple the ark of His testament." Revelation 11:19. ... /.../white/the great controversy/chapter 25 gods law immutable.htm Demonstrations by Syllogisms. That God the Word is Immutable. ... That God the Word is Immutable. 1. We have confessed one substance of the Father, of the Son, and of the Holy Ghost, and have agreed that it is immutable. ... /.../demonstrations by syllogisms that god.htm Love, Goodness, and Communication of Good, is the Immutable Glory ... ... Address 191: Love, goodness, and communication of good, is the immutable glory and perfection of the divine? Love, goodness, and ... /.../address 191 0 0 love goodness and.htm He Then Declares that the Close Relation Between Names and Things ... ... Section 2. He then declares that the close relation between names and things is immutable, and thereafter proceeds accordingly, in the most excellent manner ... /.../gregory/gregory of nyssa dogmatic treatises etc/section 2 he then declares.htm Thesaurus Immutable (1 Occurrence) ... (a.) Not mutable; unchangeable; unalterable. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... Multi-Version Concordance Immutable (1 Occurrence). ... /i/immutable.htm - 7k Immutability (1 Occurrence) ... Noah Webster's Dictionary (n.) The state or quality of being immutable; immutableness. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... /i/immutability.htm - 7k Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... 
They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus 1982. episkiazo -- to overshadow ... 1982 ("overshadow") is used in the NT of -- which always brings His -plan to pass (see 1012 , "God's immutable will for "). Word ... //strongsnumbers.com/greek2/1982.htm - 7k 1012. boule -- counsel ... This level of God's (1012 ) demonstrates He is of history, ie always in charge! [1012 () is more than God's immutable plan of physical circumstances. ... //strongsnumbers.com/greek2/1012.htm - 7k 4819. sumbaino -- to come together, ie (of events) to come to pass ... should" (G. Archer). [That is, on the "-level of the will of God. See 1012 ("the Lord's immutable plan for ").]. Word Origin from ... //strongsnumbers.com/greek2/4819.htm - 8k Library Dialogue i. --The Immutable. ... Book V. Dialogue I."The Immutable. Orthodoxos and Eranistes. Orth."Better were it for us to agree and abide by the apostolic doctrine in its purity. ... /.../theodoret/the ecclesiastical history of theodoret/dialogue i the immutable.htm Whether Truth is Immutable? ... OF TRUTH (EIGHT ARTICLES) Whether truth is immutable? Objection 1: It seems that truth is immutable. For Augustine says (De Lib. ... Therefore truth is immutable. ... /...//christianbookshelf.org/aquinas/summa theologica/whether truth is immutable.htm A Willing People and an Immutable Leader ... A Willing People and an Immutable Leader. A Sermon (No.74). Delivered on Sabbath Morning, April 13, 1856, by the. REV. CH SPURGEON. ... /.../spurgeon/spurgeons sermons volume 2 1856/a willing people and an.htm Whether God is Altogether Immutable? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether God is altogether immutable? Objection 1: It seems that God is not altogether immutable. ... /.../aquinas/summa theologica/whether god is altogether immutable.htm Whether to be Immutable Belongs to God Alone? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether to be immutable belongs to God alone? Objection 1: It seems that to be immutable does not belong to God alone. ... /.../aquinas/summa theologica/whether to be immutable belongs.htm God's Law Immutable. ... 25. GOD'S LAW IMMUTABLE. [Illustration: Chapter header.] "The temple of God was opened in heaven, and there was seen in His temple ... /.../white/the great controversy between christ and satan /25 gods law immutable.htm God's Law Immutable ... Chapter 25 God's Law Immutable. The temple of God was opened in heaven, and there was seen in His temple the ark of His testament." Revelation 11:19. ... /.../white/the great controversy/chapter 25 gods law immutable.htm Demonstrations by Syllogisms. That God the Word is Immutable. ... That God the Word is Immutable. 1. We have confessed one substance of the Father, of the Son, and of the Holy Ghost, and have agreed that it is immutable. ... /.../demonstrations by syllogisms that god.htm Love, Goodness, and Communication of Good, is the Immutable Glory ... ... 
Address 191: Love, goodness, and communication of good, is the immutable glory and perfection of the divine? Love, goodness, and ... /.../address 191 0 0 love goodness and.htm He Then Declares that the Close Relation Between Names and Things ... ... Section 2. He then declares that the close relation between names and things is immutable, and thereafter proceeds accordingly, in the most excellent manner ... /.../gregory/gregory of nyssa dogmatic treatises etc/section 2 he then declares.htm Thesaurus Immutable (1 Occurrence) ... (a.) Not mutable; unchangeable; unalterable. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... Multi-Version Concordance Immutable (1 Occurrence). ... /i/immutable.htm - 7k Immutability (1 Occurrence) ... Noah Webster's Dictionary (n.) The state or quality of being immutable; immutableness. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... /i/immutability.htm - 7k Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus 1012. boule -- counsel ... This level of God's (1012 ) demonstrates He is of history, ie always in charge! [1012 () is more than God's immutable plan of physical circumstances. ... //strongsnumbers.com/greek2/1012.htm - 7k 4819. sumbaino -- to come together, ie (of events) to come to pass ... should" (G. Archer). [That is, on the "-level of the will of God. See 1012 ("the Lord's immutable plan for ").]. Word Origin from ... //strongsnumbers.com/greek2/4819.htm - 8k Library Dialogue i. --The Immutable. ... Book V. Dialogue I."The Immutable. Orthodoxos and Eranistes. Orth."Better were it for us to agree and abide by the apostolic doctrine in its purity. ... /.../theodoret/the ecclesiastical history of theodoret/dialogue i the immutable.htm Whether Truth is Immutable? ... OF TRUTH (EIGHT ARTICLES) Whether truth is immutable? Objection 1: It seems that truth is immutable. 
For Augustine says (De Lib. ... Therefore truth is immutable. ... /...//christianbookshelf.org/aquinas/summa theologica/whether truth is immutable.htm A Willing People and an Immutable Leader ... A Willing People and an Immutable Leader. A Sermon (No.74). Delivered on Sabbath Morning, April 13, 1856, by the. REV. CH SPURGEON. ... /.../spurgeon/spurgeons sermons volume 2 1856/a willing people and an.htm Whether God is Altogether Immutable? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether God is altogether immutable? Objection 1: It seems that God is not altogether immutable. ... /.../aquinas/summa theologica/whether god is altogether immutable.htm Whether to be Immutable Belongs to God Alone? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether to be immutable belongs to God alone? Objection 1: It seems that to be immutable does not belong to God alone. ... /.../aquinas/summa theologica/whether to be immutable belongs.htm God's Law Immutable. ... 25. GOD'S LAW IMMUTABLE. [Illustration: Chapter header.] "The temple of God was opened in heaven, and there was seen in His temple ... /.../white/the great controversy between christ and satan /25 gods law immutable.htm God's Law Immutable ... Chapter 25 God's Law Immutable. The temple of God was opened in heaven, and there was seen in His temple the ark of His testament." Revelation 11:19. ... /.../white/the great controversy/chapter 25 gods law immutable.htm Demonstrations by Syllogisms. That God the Word is Immutable. ... That God the Word is Immutable. 1. We have confessed one substance of the Father, of the Son, and of the Holy Ghost, and have agreed that it is immutable. ... /.../demonstrations by syllogisms that god.htm Love, Goodness, and Communication of Good, is the Immutable Glory ... ... Address 191: Love, goodness, and communication of good, is the immutable glory and perfection of the divine? Love, goodness, and ... /.../address 191 0 0 love goodness and.htm He Then Declares that the Close Relation Between Names and Things ... ... Section 2. He then declares that the close relation between names and things is immutable, and thereafter proceeds accordingly, in the most excellent manner ... /.../gregory/gregory of nyssa dogmatic treatises etc/section 2 he then declares.htm Thesaurus Immutable (1 Occurrence) ... (a.) Not mutable; unchangeable; unalterable. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... Multi-Version Concordance Immutable (1 Occurrence). ... /i/immutable.htm - 7k Immutability (1 Occurrence) ... Noah Webster's Dictionary (n.) The state or quality of being immutable; immutableness. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... /i/immutability.htm - 7k Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... 
Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus 4819. sumbaino -- to come together, ie (of events) to come to pass ... should" (G. Archer). [That is, on the "-level of the will of God. See 1012 ("the Lord's immutable plan for ").]. Word Origin from ... //strongsnumbers.com/greek2/4819.htm - 8k Library Dialogue i. --The Immutable. ... Book V. Dialogue I."The Immutable. Orthodoxos and Eranistes. Orth."Better were it for us to agree and abide by the apostolic doctrine in its purity. ... /.../theodoret/the ecclesiastical history of theodoret/dialogue i the immutable.htm Whether Truth is Immutable? ... OF TRUTH (EIGHT ARTICLES) Whether truth is immutable? Objection 1: It seems that truth is immutable. For Augustine says (De Lib. ... Therefore truth is immutable. ... /...//christianbookshelf.org/aquinas/summa theologica/whether truth is immutable.htm A Willing People and an Immutable Leader ... A Willing People and an Immutable Leader. A Sermon (No.74). Delivered on Sabbath Morning, April 13, 1856, by the. REV. CH SPURGEON. ... /.../spurgeon/spurgeons sermons volume 2 1856/a willing people and an.htm Whether God is Altogether Immutable? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether God is altogether immutable? Objection 1: It seems that God is not altogether immutable. ... /.../aquinas/summa theologica/whether god is altogether immutable.htm Whether to be Immutable Belongs to God Alone? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether to be immutable belongs to God alone? Objection 1: It seems that to be immutable does not belong to God alone. ... /.../aquinas/summa theologica/whether to be immutable belongs.htm God's Law Immutable. ... 25. GOD'S LAW IMMUTABLE. [Illustration: Chapter header.] "The temple of God was opened in heaven, and there was seen in His temple ... /.../white/the great controversy between christ and satan /25 gods law immutable.htm God's Law Immutable ... Chapter 25 God's Law Immutable. The temple of God was opened in heaven, and there was seen in His temple the ark of His testament." Revelation 11:19. ... /.../white/the great controversy/chapter 25 gods law immutable.htm Demonstrations by Syllogisms. That God the Word is Immutable. ... That God the Word is Immutable. 1. We have confessed one substance of the Father, of the Son, and of the Holy Ghost, and have agreed that it is immutable. ... /.../demonstrations by syllogisms that god.htm Love, Goodness, and Communication of Good, is the Immutable Glory ... ... Address 191: Love, goodness, and communication of good, is the immutable glory and perfection of the divine? 
Love, goodness, and ... /.../address 191 0 0 love goodness and.htm He Then Declares that the Close Relation Between Names and Things ... ... Section 2. He then declares that the close relation between names and things is immutable, and thereafter proceeds accordingly, in the most excellent manner ... /.../gregory/gregory of nyssa dogmatic treatises etc/section 2 he then declares.htm Thesaurus Immutable (1 Occurrence) ... (a.) Not mutable; unchangeable; unalterable. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... Multi-Version Concordance Immutable (1 Occurrence). ... /i/immutable.htm - 7k Immutability (1 Occurrence) ... Noah Webster's Dictionary (n.) The state or quality of being immutable; immutableness. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... /i/immutability.htm - 7k Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Dialogue i. --The Immutable. ... Book V. Dialogue I."The Immutable. Orthodoxos and Eranistes. Orth."Better were it for us to agree and abide by the apostolic doctrine in its purity. ... /.../theodoret/the ecclesiastical history of theodoret/dialogue i the immutable.htm Whether Truth is Immutable? ... OF TRUTH (EIGHT ARTICLES) Whether truth is immutable? Objection 1: It seems that truth is immutable. For Augustine says (De Lib. ... Therefore truth is immutable. ... /...//christianbookshelf.org/aquinas/summa theologica/whether truth is immutable.htm A Willing People and an Immutable Leader ... A Willing People and an Immutable Leader. A Sermon (No.74). Delivered on Sabbath Morning, April 13, 1856, by the. REV. CH SPURGEON. ... /.../spurgeon/spurgeons sermons volume 2 1856/a willing people and an.htm Whether God is Altogether Immutable? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether God is altogether immutable? Objection 1: It seems that God is not altogether immutable. ... 
/.../aquinas/summa theologica/whether god is altogether immutable.htm Whether to be Immutable Belongs to God Alone? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether to be immutable belongs to God alone? Objection 1: It seems that to be immutable does not belong to God alone. ... /.../aquinas/summa theologica/whether to be immutable belongs.htm God's Law Immutable. ... 25. GOD'S LAW IMMUTABLE. [Illustration: Chapter header.] "The temple of God was opened in heaven, and there was seen in His temple ... /.../white/the great controversy between christ and satan /25 gods law immutable.htm God's Law Immutable ... Chapter 25 God's Law Immutable. The temple of God was opened in heaven, and there was seen in His temple the ark of His testament." Revelation 11:19. ... /.../white/the great controversy/chapter 25 gods law immutable.htm Demonstrations by Syllogisms. That God the Word is Immutable. ... That God the Word is Immutable. 1. We have confessed one substance of the Father, of the Son, and of the Holy Ghost, and have agreed that it is immutable. ... /.../demonstrations by syllogisms that god.htm Love, Goodness, and Communication of Good, is the Immutable Glory ... ... Address 191: Love, goodness, and communication of good, is the immutable glory and perfection of the divine? Love, goodness, and ... /.../address 191 0 0 love goodness and.htm He Then Declares that the Close Relation Between Names and Things ... ... Section 2. He then declares that the close relation between names and things is immutable, and thereafter proceeds accordingly, in the most excellent manner ... /.../gregory/gregory of nyssa dogmatic treatises etc/section 2 he then declares.htm Thesaurus Immutable (1 Occurrence) ... (a.) Not mutable; unchangeable; unalterable. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... Multi-Version Concordance Immutable (1 Occurrence). ... /i/immutable.htm - 7k Immutability (1 Occurrence) ... Noah Webster's Dictionary (n.) The state or quality of being immutable; immutableness. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... /i/immutability.htm - 7k Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... 
/b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Whether Truth is Immutable? ... OF TRUTH (EIGHT ARTICLES) Whether truth is immutable? Objection 1: It seems that truth is immutable. For Augustine says (De Lib. ... Therefore truth is immutable. ... /...//christianbookshelf.org/aquinas/summa theologica/whether truth is immutable.htm A Willing People and an Immutable Leader ... A Willing People and an Immutable Leader. A Sermon (No.74). Delivered on Sabbath Morning, April 13, 1856, by the. REV. CH SPURGEON. ... /.../spurgeon/spurgeons sermons volume 2 1856/a willing people and an.htm Whether God is Altogether Immutable? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether God is altogether immutable? Objection 1: It seems that God is not altogether immutable. ... /.../aquinas/summa theologica/whether god is altogether immutable.htm Whether to be Immutable Belongs to God Alone? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether to be immutable belongs to God alone? Objection 1: It seems that to be immutable does not belong to God alone. ... /.../aquinas/summa theologica/whether to be immutable belongs.htm God's Law Immutable. ... 25. GOD'S LAW IMMUTABLE. [Illustration: Chapter header.] "The temple of God was opened in heaven, and there was seen in His temple ... /.../white/the great controversy between christ and satan /25 gods law immutable.htm God's Law Immutable ... Chapter 25 God's Law Immutable. The temple of God was opened in heaven, and there was seen in His temple the ark of His testament." Revelation 11:19. ... /.../white/the great controversy/chapter 25 gods law immutable.htm Demonstrations by Syllogisms. That God the Word is Immutable. ... That God the Word is Immutable. 1. We have confessed one substance of the Father, of the Son, and of the Holy Ghost, and have agreed that it is immutable. ... /.../demonstrations by syllogisms that god.htm Love, Goodness, and Communication of Good, is the Immutable Glory ... ... Address 191: Love, goodness, and communication of good, is the immutable glory and perfection of the divine? Love, goodness, and ... /.../address 191 0 0 love goodness and.htm He Then Declares that the Close Relation Between Names and Things ... ... Section 2. He then declares that the close relation between names and things is immutable, and thereafter proceeds accordingly, in the most excellent manner ... /.../gregory/gregory of nyssa dogmatic treatises etc/section 2 he then declares.htm Thesaurus Immutable (1 Occurrence) ... (a.) Not mutable; unchangeable; unalterable. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... Multi-Version Concordance Immutable (1 Occurrence). ... /i/immutable.htm - 7k Immutability (1 Occurrence) ... Noah Webster's Dictionary (n.) The state or quality of being immutable; immutableness. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... /i/immutability.htm - 7k Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... 
is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus A Willing People and an Immutable Leader ... A Willing People and an Immutable Leader. A Sermon (No.74). Delivered on Sabbath Morning, April 13, 1856, by the. REV. CH SPURGEON. ... /.../spurgeon/spurgeons sermons volume 2 1856/a willing people and an.htm Whether God is Altogether Immutable? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether God is altogether immutable? Objection 1: It seems that God is not altogether immutable. ... /.../aquinas/summa theologica/whether god is altogether immutable.htm Whether to be Immutable Belongs to God Alone? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether to be immutable belongs to God alone? Objection 1: It seems that to be immutable does not belong to God alone. ... /.../aquinas/summa theologica/whether to be immutable belongs.htm God's Law Immutable. ... 25. GOD'S LAW IMMUTABLE. [Illustration: Chapter header.] "The temple of God was opened in heaven, and there was seen in His temple ... /.../white/the great controversy between christ and satan /25 gods law immutable.htm God's Law Immutable ... Chapter 25 God's Law Immutable. The temple of God was opened in heaven, and there was seen in His temple the ark of His testament." Revelation 11:19. ... /.../white/the great controversy/chapter 25 gods law immutable.htm Demonstrations by Syllogisms. That God the Word is Immutable. ... That God the Word is Immutable. 1. We have confessed one substance of the Father, of the Son, and of the Holy Ghost, and have agreed that it is immutable. ... /.../demonstrations by syllogisms that god.htm Love, Goodness, and Communication of Good, is the Immutable Glory ... ... Address 191: Love, goodness, and communication of good, is the immutable glory and perfection of the divine? Love, goodness, and ... /.../address 191 0 0 love goodness and.htm He Then Declares that the Close Relation Between Names and Things ... ... Section 2. He then declares that the close relation between names and things is immutable, and thereafter proceeds accordingly, in the most excellent manner ... 
/.../gregory/gregory of nyssa dogmatic treatises etc/section 2 he then declares.htm Thesaurus Immutable (1 Occurrence) ... (a.) Not mutable; unchangeable; unalterable. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... Multi-Version Concordance Immutable (1 Occurrence). ... /i/immutable.htm - 7k Immutability (1 Occurrence) ... Noah Webster's Dictionary (n.) The state or quality of being immutable; immutableness. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... /i/immutability.htm - 7k Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Whether God is Altogether Immutable? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether God is altogether immutable? Objection 1: It seems that God is not altogether immutable. ... /.../aquinas/summa theologica/whether god is altogether immutable.htm Whether to be Immutable Belongs to God Alone? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether to be immutable belongs to God alone? Objection 1: It seems that to be immutable does not belong to God alone. ... /.../aquinas/summa theologica/whether to be immutable belongs.htm God's Law Immutable. ... 25. GOD'S LAW IMMUTABLE. [Illustration: Chapter header.] "The temple of God was opened in heaven, and there was seen in His temple ... /.../white/the great controversy between christ and satan /25 gods law immutable.htm God's Law Immutable ... Chapter 25 God's Law Immutable. The temple of God was opened in heaven, and there was seen in His temple the ark of His testament." Revelation 11:19. ... /.../white/the great controversy/chapter 25 gods law immutable.htm Demonstrations by Syllogisms. That God the Word is Immutable. ... That God the Word is Immutable. 1. We have confessed one substance of the Father, of the Son, and of the Holy Ghost, and have agreed that it is immutable. ... 
/.../demonstrations by syllogisms that god.htm Love, Goodness, and Communication of Good, is the Immutable Glory ... ... Address 191: Love, goodness, and communication of good, is the immutable glory and perfection of the divine? Love, goodness, and ... /.../address 191 0 0 love goodness and.htm He Then Declares that the Close Relation Between Names and Things ... ... Section 2. He then declares that the close relation between names and things is immutable, and thereafter proceeds accordingly, in the most excellent manner ... /.../gregory/gregory of nyssa dogmatic treatises etc/section 2 he then declares.htm Thesaurus Immutable (1 Occurrence) ... (a.) Not mutable; unchangeable; unalterable. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... Multi-Version Concordance Immutable (1 Occurrence). ... /i/immutable.htm - 7k Immutability (1 Occurrence) ... Noah Webster's Dictionary (n.) The state or quality of being immutable; immutableness. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... /i/immutability.htm - 7k Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Whether to be Immutable Belongs to God Alone? ... THE IMMUTABILITY OF GOD (TWO ARTICLES) Whether to be immutable belongs to God alone? Objection 1: It seems that to be immutable does not belong to God alone. ... /.../aquinas/summa theologica/whether to be immutable belongs.htm God's Law Immutable. ... 25. GOD'S LAW IMMUTABLE. [Illustration: Chapter header.] "The temple of God was opened in heaven, and there was seen in His temple ... /.../white/the great controversy between christ and satan /25 gods law immutable.htm God's Law Immutable ... Chapter 25 God's Law Immutable. The temple of God was opened in heaven, and there was seen in His temple the ark of His testament." Revelation 11:19. ... 
/.../white/the great controversy/chapter 25 gods law immutable.htm Demonstrations by Syllogisms. That God the Word is Immutable. ... That God the Word is Immutable. 1. We have confessed one substance of the Father, of the Son, and of the Holy Ghost, and have agreed that it is immutable. ... /.../demonstrations by syllogisms that god.htm Love, Goodness, and Communication of Good, is the Immutable Glory ... ... Address 191: Love, goodness, and communication of good, is the immutable glory and perfection of the divine? Love, goodness, and ... /.../address 191 0 0 love goodness and.htm He Then Declares that the Close Relation Between Names and Things ... ... Section 2. He then declares that the close relation between names and things is immutable, and thereafter proceeds accordingly, in the most excellent manner ... /.../gregory/gregory of nyssa dogmatic treatises etc/section 2 he then declares.htm Thesaurus Immutable (1 Occurrence) ... (a.) Not mutable; unchangeable; unalterable. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... Multi-Version Concordance Immutable (1 Occurrence). ... /i/immutable.htm - 7k Immutability (1 Occurrence) ... Noah Webster's Dictionary (n.) The state or quality of being immutable; immutableness. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... /i/immutability.htm - 7k Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus God's Law Immutable. ... 25. GOD'S LAW IMMUTABLE. [Illustration: Chapter header.] "The temple of God was opened in heaven, and there was seen in His temple ... /.../white/the great controversy between christ and satan /25 gods law immutable.htm God's Law Immutable ... Chapter 25 God's Law Immutable. The temple of God was opened in heaven, and there was seen in His temple the ark of His testament." Revelation 11:19. ... 
/.../white/the great controversy/chapter 25 gods law immutable.htm Demonstrations by Syllogisms. That God the Word is Immutable. ... That God the Word is Immutable. 1. We have confessed one substance of the Father, of the Son, and of the Holy Ghost, and have agreed that it is immutable. ... /.../demonstrations by syllogisms that god.htm Love, Goodness, and Communication of Good, is the Immutable Glory ... ... Address 191: Love, goodness, and communication of good, is the immutable glory and perfection of the divine? Love, goodness, and ... /.../address 191 0 0 love goodness and.htm He Then Declares that the Close Relation Between Names and Things ... ... Section 2. He then declares that the close relation between names and things is immutable, and thereafter proceeds accordingly, in the most excellent manner ... /.../gregory/gregory of nyssa dogmatic treatises etc/section 2 he then declares.htm Thesaurus Immutable (1 Occurrence) ... (a.) Not mutable; unchangeable; unalterable. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... Multi-Version Concordance Immutable (1 Occurrence). ... /i/immutable.htm - 7k Immutability (1 Occurrence) ... Noah Webster's Dictionary (n.) The state or quality of being immutable; immutableness. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... /i/immutability.htm - 7k Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus God's Law Immutable ... Chapter 25 God's Law Immutable. The temple of God was opened in heaven, and there was seen in His temple the ark of His testament." Revelation 11:19. ... /.../white/the great controversy/chapter 25 gods law immutable.htm Demonstrations by Syllogisms. That God the Word is Immutable. ... That God the Word is Immutable. 1. We have confessed one substance of the Father, of the Son, and of the Holy Ghost, and have agreed that it is immutable. ... 
/.../demonstrations by syllogisms that god.htm Love, Goodness, and Communication of Good, is the Immutable Glory ... ... Address 191: Love, goodness, and communication of good, is the immutable glory and perfection of the divine? Love, goodness, and ... /.../address 191 0 0 love goodness and.htm He Then Declares that the Close Relation Between Names and Things ... ... Section 2. He then declares that the close relation between names and things is immutable, and thereafter proceeds accordingly, in the most excellent manner ... /.../gregory/gregory of nyssa dogmatic treatises etc/section 2 he then declares.htm Thesaurus Immutable (1 Occurrence) ... (a.) Not mutable; unchangeable; unalterable. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... Multi-Version Concordance Immutable (1 Occurrence). ... /i/immutable.htm - 7k Immutability (1 Occurrence) ... Noah Webster's Dictionary (n.) The state or quality of being immutable; immutableness. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... /i/immutability.htm - 7k Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Demonstrations by Syllogisms. That God the Word is Immutable. ... That God the Word is Immutable. 1. We have confessed one substance of the Father, of the Son, and of the Holy Ghost, and have agreed that it is immutable. ... /.../demonstrations by syllogisms that god.htm Love, Goodness, and Communication of Good, is the Immutable Glory ... ... Address 191: Love, goodness, and communication of good, is the immutable glory and perfection of the divine? Love, goodness, and ... /.../address 191 0 0 love goodness and.htm He Then Declares that the Close Relation Between Names and Things ... ... Section 2. He then declares that the close relation between names and things is immutable, and thereafter proceeds accordingly, in the most excellent manner ... 
/.../gregory/gregory of nyssa dogmatic treatises etc/section 2 he then declares.htm Thesaurus Immutable (1 Occurrence) ... (a.) Not mutable; unchangeable; unalterable. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... Multi-Version Concordance Immutable (1 Occurrence). ... /i/immutable.htm - 7k Immutability (1 Occurrence) ... Noah Webster's Dictionary (n.) The state or quality of being immutable; immutableness. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... /i/immutability.htm - 7k Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Love, Goodness, and Communication of Good, is the Immutable Glory ... ... Address 191: Love, goodness, and communication of good, is the immutable glory and perfection of the divine? Love, goodness, and ... /.../address 191 0 0 love goodness and.htm He Then Declares that the Close Relation Between Names and Things ... ... Section 2. He then declares that the close relation between names and things is immutable, and thereafter proceeds accordingly, in the most excellent manner ... /.../gregory/gregory of nyssa dogmatic treatises etc/section 2 he then declares.htm Thesaurus Immutable (1 Occurrence) ... (a.) Not mutable; unchangeable; unalterable. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... Multi-Version Concordance Immutable (1 Occurrence). ... /i/immutable.htm - 7k Immutability (1 Occurrence) ... Noah Webster's Dictionary (n.) The state or quality of being immutable; immutableness. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... /i/immutability.htm - 7k Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... 
is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus He Then Declares that the Close Relation Between Names and Things ... ... Section 2. He then declares that the close relation between names and things is immutable, and thereafter proceeds accordingly, in the most excellent manner ... /.../gregory/gregory of nyssa dogmatic treatises etc/section 2 he then declares.htm Thesaurus Immutable (1 Occurrence) ... (a.) Not mutable; unchangeable; unalterable. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... Multi-Version Concordance Immutable (1 Occurrence). ... /i/immutable.htm - 7k Immutability (1 Occurrence) ... Noah Webster's Dictionary (n.) The state or quality of being immutable; immutableness. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... /i/immutability.htm - 7k Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). 
The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Immutability (1 Occurrence) ... Noah Webster's Dictionary (n.) The state or quality of being immutable; immutableness. Int. Standard Bible Encyclopedia. IMMUTABILITY; IMMUTABLE. ... /i/immutability.htm - 7k Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Unchangeableness (1 Occurrence) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeableness.htm - 27k Unchangeable (4 Occurrences) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... 
They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Unchangeable (4 Occurrences) ... is unchangeable in His love and grace and power to save, but that is only because it is the love and grace and power of the absolute, infinite and immutable God ... /u/unchangeable.htm - 28k Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Imna (1 Occurrence) /i/imna.htm - 6k Fate (138 Occurrences) ... 1. (n.) A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Fate (138 Occurrences) ... 1. (n.) 
A fixed decree by which the order of things is prescribed; the immutable law of the universe; inevitable necessity; the force by which all existence is ... /f/fate.htm - 37k Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Fled (181 Occurrences) ... Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold ... /f/fled.htm - 38k Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Eternal (166 Occurrences) ... 3. (a.) Continued without intermission; perpetual; ceaseless; constant. 4. (a.) Existing at all times without change; immutable. ... /e/eternal.htm - 46k Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Predestination ... They teach that the eternal, sovereign, immutable, and unconditional decree or "determinate purpose" of God governs all events. ... /p/predestination.htm - 26k Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? 
| GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Brass (168 Occurrences) ... Dan. 2:39). The "mountains of brass" Zechariah (6:1) speaks of have been supposed to represent the immutable decrees of God. The ... /b/brass.htm - 44k Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Resources What is the immutability of God? | GotQuestions.org What is finite godism? | GotQuestions.org Does God have emotions? | GotQuestions.org Immutable: Dictionary and Thesaurus | Clyx.com Bible Concordance • Bible Dictionary • Bible Encyclopedia • Topical Bible • Bible Thesuarus Hebrews 6:18 that by two immutable things, in which it is impossible for God to lie, we may have a strong encouragement, who have fled for refuge to take hold of the hope set before us. (WEB KJV ASV WBS YLT) Subtopics Immutable Related Terms Zophah (2 Occurrences) Jimnites (1 Occurrence) Imnah (4 Occurrences) Immutable (1 Occurrence) Amal (1 Occurrence) Shelesh (1 Occurrence) Helem (2 Occurrences) Hotham (3 Occurrences) Immutable Zophah (2 Occurrences) Jimnites (1 Occurrence) Imnah (4 Occurrences) Immutable (1 Occurrence) Amal (1 Occurrence) Shelesh (1 Occurrence) Helem (2 Occurrences) Hotham (3 Occurrences) | | | |
189386
https://chemguide.net/wp-content/uploads/2017/10/SP06-Bonding-Theories-Presentation.pdf
BONDING THEORIES SCH4U1 SP06

Lewis Theory of Bonding (1916)
Key Points:
▪ The noble gas electron configurations are most stable.
▪ Stable octets can be formed through the transfer of electrons from metals to non-metals.
▪ Stable octets can also form through sharing of electrons between non-metals (covalent bonding).
▪ Electrons are most stable when they are paired.

Electron Dot Diagrams & Lewis Structures

Free Radicals
• Atoms or molecules with unpaired electrons.
• These are very reactive substances.
• e.g. reactive hydroxyl radical (OH) vs. stable hydroxide ion (OH⁻)

Resonance
• When two or more Lewis structures are possible, a hybrid or "resonance" structure is assumed.
• Electrons are assumed to be "delocalized".
• e.g. nitrite ion

Practice: Draw the Electron Dot and Lewis Structure for these covalently bonded elements, compounds or ions:
a) F₂ b) NF₃ c) N₂F₂ d) N₂ e) PCl₅ f) CN⁻ g) NH₄⁺ h) OCl i) BrO₂⁻ j) SO₃²⁻ k) O₃
Write your answers on the board.

Valence Bond (VB) Theory (1928)
▪ VB Theory is a quantum mechanical model of bonding.
▪ Covalent bonds form when a pair of half-filled orbitals overlap to form combined (or bonding) orbitals.
▪ Bonding orbitals contain 2 electrons with opposite spin.
▪ Electron density is highest between the 2 nuclei.
▪ Direct overlap of orbitals is called a sigma (σ) bond.

VB Theory (continued)
▪ Overlapping orbitals can also form between s and p orbitals (e.g. HF).
▪ The combined orbital (sigma bond) represents a lower energy state of the two atoms.

Molecular Orbital (MO) Theory (1933)
▪ Lewis Theory considers all 4 electrons around carbon to be identical.
▪ Contradicted by the Wave-Mechanical Model (1s²2s²2p²).
▪ Experimental evidence confirmed the Lewis model of carbon bonding in compounds (e.g. CH₄)!
▪ Carbon does contain 4 identical covalent bonds !?!
▪ [Complete the Orbital Representation Table now.]

Molecular Orbital Theory
• States that atomic orbitals can combine to form molecular orbitals (MOs).
• MOs are combinations of Schrödinger's equations containing multiple nuclei.
• Formation of an MO involves electron promotion & orbital hybridization.

MO Formation in Carbon
1) A 2s electron is "promoted" into the empty p orbitals.
2) The 2s¹2p³ atomic orbitals undergo hybridization to form 4 half-filled sp³ bonding orbitals.
3) Each identical sp³ orbital can form a sigma bond with another half-filled orbital.

sp³ Hybridization and Shape
▪ Electron repulsion moves the 4 bonding orbitals as far apart as possible, forming the tetrahedral shape.

Need another explanation?
1) Review the extra readings online.
2) Watch these clips: Hybridization; Molecular Shape and Orbital Hybridization.

Hybrid Orbitals
s + p: sp hybrid (linear molecule)
s + 2p: sp² hybrid (trigonal planar)
s + 3p: sp³ hybrid (tetrahedral)

Other hybrids…
• Exceptions to the "octet rule" involve unusual hybrids.
• s + 3p + d: sp³d hybridization, 5 identical bonding orbitals, trigonal bipyramidal shape (e.g. PCl₅)
• s + 3p + 2d: sp³d² hybridization, 6 identical bonding orbitals, octahedral shape (e.g. SF₆)

THINKING EXERCISE: Explain the weird valences of the following central atoms:
• Br in BrF₅
• S in SO₄²⁻
• N in NO₃⁻
• Xe in XeF₄
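Since the deck pairs each hybrid orbital set with an ideal geometry, a small lookup sketch can make the mapping concrete. This is an illustrative Python sketch, not part of the original slides; the molecule-to-domain-count pairings for BeCl2 and BF3 are my own assumptions based on standard Lewis structures (CH4, PCl5 and SF6 appear in the slides).

```python
# Illustrative sketch (not from the slides): map the number of electron
# domains around a central atom to the hybrid orbital set and ideal shape
# summarized in the "Hybrid Orbitals" and "Other hybrids" slides.

HYBRIDIZATION = {
    2: ("sp",    "linear"),
    3: ("sp2",   "trigonal planar"),
    4: ("sp3",   "tetrahedral"),
    5: ("sp3d",  "trigonal bipyramidal"),
    6: ("sp3d2", "octahedral"),
}

# Domain counts below are assumptions from each molecule's Lewis structure.
examples = {"BeCl2": 2, "BF3": 3, "CH4": 4, "PCl5": 5, "SF6": 6}

for molecule, domains in examples.items():
    hybrid, shape = HYBRIDIZATION[domains]
    print(f"{molecule}: {domains} electron domains -> {hybrid} ({shape})")
```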
189387
https://www.sciencedirect.com/science/article/abs/pii/S017616170600085X
Journal of Plant Physiology, Volume 164, Issue 4, 5 April 2007, Pages 394-407

Fermentation metabolism in roots of wheat seedlings after hypoxic pre-treatment in different anoxic incubation systems

Angelika Mustroph (a), Gerd Albrecht (b)

Summary
A hypoxic pre-treatment (HPT) can improve the anoxic survival of flooding-sensitive plants. Here, we tested whether a 4-d HPT of wheat plants (Triticum aestivum L.) would improve their anoxic resistance, and if so, why. We found that the metabolic adjustment during prolonged HPT involved an increased lactate excretion rate, the up-regulation of glycolytic and fermentative enzymes, as well as the accumulation of various sugars. Therefore, HPT wheat roots could sustain a 3 times higher ethanolic fermentation rate during an anoxic period compared to non-pre-treated (NHPT) roots. Nevertheless, the enhanced fermentation rate provided temporary relief to the energy crisis only, and both NHPT and HPT plants died after 5 d of anoxia in illumination. Comparison of different low oxygen incubation systems using excised roots or roots of intact plants revealed striking differences. The benefits of intact shoots, oxygen transport as well as additional sugar supply enabled a more stable energy supply of anoxia-treated NHPT and HPT roots. However, the magnitude of the fermentation rate was correlated with a high ATP content during dark anoxic incubation, but not in illumination.

Introduction
Plant roots are frequently challenged by oxygen limitation through soil flooding. Consequently, inhibition of mitochondrial respiration leads to energy deficiency and retarded metabolic activity. Responses of plants to a sudden shift from aerobic to anoxic conditions have been thoroughly described (summarized in Kennedy et al., 1992; Drew, 1997; Sousa and Sodek, 2002; Gibbs and Greenway, 2003). However, under natural conditions, oxygen concentration would decrease gradually, and hence, anoxia is always preceded by hypoxia (Setter and Waters, 2003). It is for this reason that some authors used a hypoxic pre-treatment (HPT) before exposing plants to anoxia, which increased the anoxia tolerance of wheat (Waters et al., 1991), maize (Saglio et al., 1988; Johnson et al., 1989), rice (Ellis and Setter, 1999), oat (Kato-Noguchi, 2002), lettuce (Kato-Noguchi, 2000), and tomato (Germain et al., 1997). One of the best-known mechanisms contributing to the higher anoxia tolerance of HPT plants is the induction of glycolytic and fermentative enzymes during a low oxygen treatment, for example aldolase and enolase (Bouny and Saglio, 1996; Germain et al., 1997), alcohol dehydrogenase (ADH) and pyruvate decarboxylase (PDC) (Johnson et al., 1989; Albrecht et al., 2004). This induction could improve or at least sustain the glycolytic rate in anoxic plants. Nevertheless, the enhancement of the activity of fermentative enzymes alone does not lead to a higher survival of the plants, since PDC- or ADH-over-expressing plants showed a lower tolerance to anoxia than did wildtype plants (Tadege et al., 1998; Ellis et al., 2000; Rahman et al., 2001).
Thus, additional factors may play a role in the tolerance of HPT plants against anoxia, for example the sugar availability. During low oxygen stress, shoot-to-root sugar transport is reduced (Dongen et al., 2003), and photosynthesis is inhibited (Liao and Lin, 1996; Ahmed et al., 2001). At the same time, sugar utilization via glycolysis could be enhanced, as ethanolic fermentation is energetically less effective than mitochondrial respiration (Summers et al., 2000). These factors are responsible for the strong substrate deficiency in anoxic root cells (Guglielminetti et al., 1995; Perata et al., 1996; Mustroph and Albrecht, 2003), and may lead to inhibition of the fermentation rate. In contrast to anoxically treated wheat roots, carbohydrates accumulate during hypoxic treatment (Setter et al., 1987; Waters et al., 1991; Albrecht et al., 1993, Albrecht et al., 2004; Mustroph and Albrecht, 2003). The accumulation of sugars has been attributed to the fact that growth is inhibited in hypoxically treated roots, while photosynthetic reactions are still active in the less challenged leaves (Mustroph and Albrecht, 2003). This carbohydrate accumulation might support fermentation of HPT roots over a long period of anoxic stress, and could enhance tolerance against oxygen deficiency.

The aim of this study was to investigate whether a strong HPT of roots enhances the survival of wheat seedlings during anoxia, and which mechanisms are responsible for it. In contrast to earlier publications concerning HPT, a long (4 d) period of pre-treatment was used to ensure fully adapted roots. During these studies, we focused on the interplay between three main factors: the rates of ethanolic and lactic acid fermentation and the sugar availability. Furthermore, a primary objective was to compare different anoxic incubation systems that are used in the literature. Most of the research reported in the available literature has been carried out with excised roots or root tips (e.g. Saglio et al., 1988; Johnson et al., 1989). Under natural conditions, shoots play a considerable role in plant survival under oxygen deficiency stress (Menegus et al., 1991; Schlueter and Crawford, 2001). The shoots are able to transport oxygen as well as carbohydrates into the roots to improve their resistance to anoxia. Therefore, in addition to experiments with excised roots, we also investigated the effect of shoots on the fermentation activity of the roots by using intact wheat plants in differing incubation systems.

Section snippets

Plant growth and stress treatment
Caryopses of wheat (Triticum aestivum L. cv. Alcedo) were germinated in the dark on moist filter paper. After 2 d they were transferred to 5 L containers containing aerated hydroponic KNOP nutrient solution (Albrecht et al., 1993). Plants were cultivated in a growth chamber under a 16-h photoperiod with a photon flux density of 310 μmol s−1 m−2 at 22/17°C. After 5 d of aerobic growth, the roots of the control plants [no hypoxic pre-treatment (NHPT)] were further aerated, while those of the HPT …

Effects of HPT on plant metabolism
As expected, 4 d of hypoxic treatment of the roots resulted in markedly higher activities of fermentative and glycolytic enzymes compared to the aerated control. The PDC and ADH activities increased 14.5- and 10.6-fold, respectively (Fig. 2), while LDH activity increased by only 60%.
For glycolytic enzymes we found that HPT plants had a 3.4-fold higher aldolase activity than did NHPT plants. The activity of enolase was 2.2 times higher, while PFK activity did not change significantly (Fig. 2).

HPT induces lactic acid excretion
According to the widely accepted Davies–Roberts hypothesis, a primary response to oxygen limitation by plants is the start of lactate fermentation, which lowers the cytoplasmic pH (Roberts et al., 1984a, 1984b). This pH decrease inhibits LDH and activates PDC, which results in ethanolic fermentation during the later anoxic period (Rivoal and Hanson, 1993). In our study, we confirmed rapid lactate production in response to the onset phase of anoxia in wheat roots, which was …

Conclusions
With our experiments, we were able to demonstrate that wheat plants showed longer anoxic survival after HPT than without pre-treatment. This observation was likely a product of two mechanisms. First, HPT increased the lactate excretion, and therefore prevented cytoplasmic acidosis. Second, the ethanolic fermentation rate was greatly enhanced by the HPT, which resulted in a higher ATP production rate. This difference was due to induced glycolytic and fermentative enzyme activities as well as …

Acknowledgments
The authors thank Elena Iulia Boamfa for skillful technical assistance during the photoacoustic measurements, which were supported by the "European Union Access to Research Infrastructures action of the Improving Human Potential Program". Yvonne Poers is acknowledged for the measurement of ATP content.

References (45)
M.H. Ellis et al., Hypoxia induces anoxia tolerance in completely submerged rice seedlings, J Plant Physiol (1999)
C.T. Liao et al., Photosynthetic response of grafted bitter melon seedlings to flood stress, Environ Exp Bot (1996)
M. Tadege et al., Ethanolic fermentation: new functions for an old pathway, Trends Plant Sci (1999)
S. Ahmed et al., Alterations in photosynthesis and some antioxidant enzymatic activities of mungbean subjected to waterlogging, Plant Sci (2001)
G. Albrecht et al., Fructan content of wheat seedlings (Triticum aestivum) under hypoxia and following re-aeration, New Phytol (1993)
G. Albrecht et al., Sugar and fructan accumulation during metabolic adjustment between respiration and fermentation under low oxygen conditions in wheat roots, Physiol Plant (2004)
W. Armstrong, Aeration in higher plants, Adv Bot Res (1979)
A. Bertani et al., Effect of anaerobiosis on rice seedlings: growth, metabolic rate, and rate of fermentation products, J Exp Bot (1980)
E.I. Boamfa et al., Dynamic aspects of alcoholic fermentation of rice seedlings in response to anaerobiosis and to complete submergence: relationship to submergence tolerance, Ann Bot (2003)
J.M. Bouny et al., Glycolytic flux and hexokinase activities in anoxic maize root tips acclimated by hypoxic pre-treatment, Plant Physiol (1996)
J.T. Dongen et al., Phloem metabolism and function have to cope with low internal oxygen, Plant Physiol (2003)
M.C. Drew, Oxygen deficiency and root metabolism: injury and acclimation under hypoxia and anoxia, Ann Rev Plant Physiol Plant Mol Biol (1997)
M.H. Ellis et al., Transgenic cotton (Gossypium hirsutum) over-expressing alcohol dehydrogenase shows increased ethanol fermentation but no increase in tolerance to oxygen deficiency, Austr J Plant Physiol (2000)
V. Germain et al., The role of sugars, hexokinase and sucrose synthase in the determination of hypoxically induced tolerance to anoxia in tomato roots, Plant Physiol (1997)
J. Gibbs et al., Mechanisms of anoxia tolerance in plants. I. Growth, survival and anaerobic catabolism, Funct Plant Biol (2003)
A.G. Good et al., Long-term anaerobic metabolism in root tissue: metabolic products of pyruvate metabolism, Plant Physiol (1993)
E. Gout et al., Origin of the cytoplasmic pH changes during anaerobic stress in higher plant cells. Carbon-13 and phosphorus-31 nuclear magnetic resonance studies, Plant Physiol (2001)
L. Guglielminetti et al., Effect of anoxia on carbohydrate metabolism in rice seedlings, Plant Physiol (1995)
M. Jacobs et al., Isolation and biochemical analysis of ethyl methanesulfonate-induced alcohol dehydrogenase null mutants of Arabidopsis thaliana (L.) Heynh, Biochem Genet (1988)
J.R. Johnson et al., Hypoxic induction of anoxia tolerance in root tips of Zea mays, Plant Physiol (1989)
H. Kato-Noguchi, Abscisic acid and hypoxic induction of anoxia tolerance in roots of lettuce seedlings, J Exp Bot (2000)
H. Kato-Noguchi, Hypoxic acclimation to anoxia in Avena roots, Plant Growth Regul (2002)
189388
https://www.youtube.com/watch?v=4W_3ehheXEs
A Diophantine System | Non-negative Integers
SyberMath · 155000 subscribers · 228 likes · 21122 views · Posted: 28 Sep 2023

Description: 🤩 Hello everyone, I'm very excited to bring you a new channel (SyberMath Shorts). Enjoy...and thank you for your support!!! 🧡🥰🎉🥳🧡 ⭐ Join this channel to get access to perks: → My merch → Follow me → Subscribe → ⭐ Suggest → If you need to post a picture of your solution or idea: #DiophantineEquations #NumberTheoryProblems #NumberTheory via @YouTube @Apple @Desmos @NotabilityApp @googledocs @canva

SIMILAR PROBLEMS: An Exponential Equation | 2^{3^{5^x}} = 3^{2^{5^x}}; A Surprising Exponential Equation

PLAYLISTS 🎵: Number Theory Problems; Challenging Math Problems; Trigonometry Problems; Diophantine Equations and Systems; Calculus

16 comments

Transcript: hello everyone in this video we're going to be solving a Diophantine system we have two equations and we're going to be solving for let's say non-negative integers I just want to include zero because if we don't then we kind of run into some trouble or it's going to be more interesting that way anyways we could also solve this for integers but that would again be too many cases in my opinion I haven't tried it but let's go ahead and solve this system for non-negative integers which makes it a Diophantine equation or Diophantine system now what would happen if we were solving for real numbers there are two equations but four variables that basically means we're going to have infinitely many solutions how do you represent those solutions by using a parameter well there must be a way to do it but again that's outside the scope of this video anyway so let's go ahead and take a look I have two equations ac plus bd equals 6 and ad plus bc is equal to seven so these two equations were carefully made up so that they would not be factorable in and of themselves so when you look at the first equation there's no common factors they're all different and the second equation is the same now Diophantine equations and systems are fun I made a separate video a lecture video on those you can go ahead and check that out we gotta use some special strategies sometimes modular arithmetic sometimes factoring sometimes looking at different cases or finding some identities so on and so forth finding some restrictions or proving that there are no solutions or proving that there are infinitely many solutions so all of these are possible now here's what I'm going to do I'm going to go ahead and add these equations and the motivation behind it is each equation when you look at each equation they're not factorable but when you combine them we get something nice let's go ahead and take a look so I'm going to go ahead and rewrite these smaller ac plus bd equals 6 and ad plus bc is equal to seven I'm going to go ahead and add these equations up when I do I want to write these two together ac plus ad and then bc plus bd equals 13.
now obviously we have to check we have to make sure that whatever we get from here also works with each equation because this is a result of two different equations and whatever we get from here may not work uh in the general case anyway so factor out a b and then c plus d is a common factor so we can go ahead and I like to write in alphabetical order so let's go ahead and put it as a plus b times c plus d this is the most critical part why because we have the sum of products we had two equations now we got a single equation but guess what it's factored beautiful since a b c d are all non-negative integers a plus b and c plus d are also non-negative integers which means we have to look at factors of 13. but there's something nice about it and I apologize if you can hear the plane uh I mean those are open it's kind of warm a little bit warm here anyways we're going to look at factors of 13 and 13 is a prime number so that's nice there aren't that many but we kind of end up with four cases a plus b equals 13 c plus d equals one another case would be a plus b equals one and c plus d equals 13. now there's two more cases with the negatives but if you look at them like for example a plus b equals negative one and c plus d is negative thirteen since a b c d are non-negative their sums cannot be negative so these two cases are not going to be considered because they can't work okay so we're going to be looking at these cases let's call this the first one and let's call this the second one depending on how much time I have left I'm going to try to go through both cases but at least one okay so let's take a look at the first case and the second case is so similar I guess we could leave it as an exercise for the reader don't hate me for that now we're going to go ahead and take a look at these two equations so how can I handle this right I have the sum so I kind of got from a more complicated equation look at that to a much much simpler one look at this isn't that better much better so we're going to look for two numbers whose sum is 13 but here's the thing do we have to go through all the cases like a equals 0 b equals 13, a equals one, and so on because no no you don't need to we're going to use an awesome method which I use very frequently and it is called substitution if you said substitution you guessed it right so here's what I'm going to do I'm going to replace a with 13 minus b and c with 1 minus d and then plug these into one of the equations which one doesn't matter I'm going to use the first one because the first one is I don't know I like it better so let's go ahead and replace a with 13 minus b and c with 1 minus d and guess what this is going to give us another equation in two variables but this time it's going to be real cool you'll see in a little bit why 13 minus 13d minus b plus bd plus bd that's going to make 2bd, 2bd or not 2bd, that didn't work, anyways so we got this let's um organize this a little bit I want to write the 2bd first and then followed by minus b minus 13d and then 6 minus 13 is going to give me negative 7.
but I rather leave the 6 minus 13 on the right hand side because I'm supposed to add something to both sides to make it factorable and I think it's called Simon's Favorite Factoring Trick SFFT I use that a lot especially with Diophantine systems or equations anyways we take out a b this gives us b times 2d minus one now here we have to be very careful I want to get 2d minus 1 inside the parenthesis what should I multiply it by and the answer is a fraction 13 over 2. but don't worry about it we'll fix it in a little bit so that just means that I added 13 over 2 to both sides so this is going to be negative 7 plus 13 over 2. we have to add the same thing on both sides so we don't change the equation and now notice that we can write this as 2d minus 1 multiplied by b minus 13 over 2 equals negative 7 plus 13 over 2, that's negative 14 over 2 plus 13 over 2, that's going to give me negative one-half obviously you would probably, 99 percent would, multiply both sides by 2 to get rid of all the fractions and if you do that you're gonna get, not 4bd, I want to multiply just this by 2 to get rid of the fraction so it's going to be 2d minus 1 multiplied by 2b minus 13 equals negative 1. awesome so we got this equation what are we going to do with it right well we're just going to look at factors so from two equations we got a single equation and then from that single equation by way of substitution uh we got another equation so on and so on anyways it's a long story so this is what we ended up with so what do you do with that factors of negative 1 is what you need to look at right so how do you do that well it can be 1 and negative one or negative one and one let's go ahead and take a look at each case and the second case will probably be yours if 2d minus 1 is equal to one I guess we could do this mentally can't we it's fairly simple if 2d minus 1 is equal to 1 then that means d is equal to one and 2b minus 13 equals negative 1 means 2b equals 12 and b is equal to 6, 2b or not 2b yeah I can say that right so this implies something if d is one we know that a plus b is 13 we gotta write this down here copy and c plus d is one great so now we do know that if d is 1 then c is 0 and if b is 6 a is seven so this gives us an ordered quadruple which I'm going to write a little later and then from the second case if 2d minus 1 is equal to negative 1 that means d is equal to zero and if 2b minus 13 is equal to 1 that means b is equal to 7. so they kind of switch around from here if d is 0 c is one if b is 7 then a is six so they are interchangeable in that sense so the solutions are going to be then if you write them as ordered quadruples 7 6 0 1 and six seven one zero and this brings us to the end of the video the other solutions are yours thank you for watching I hope you enjoyed it please let me know don't forget to comment like and subscribe I'll see you next time with another video until then be safe take care and bye bye
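The case analysis in the video can be sanity-checked by brute force. Below is a short Python sketch of my own, not from the video; since (a+b)(c+d) = 13, every variable is at most 13, so a small search range suffices, and the search also surfaces the second-case solutions the video leaves as an exercise.

```python
# Brute-force check (mine, not from the video) of the system
#   ac + bd = 6,  ad + bc = 7
# over non-negative integers. Because (a+b)(c+d) = 13, each of
# a, b, c, d is at most 13.

solutions = [
    (a, b, c, d)
    for a in range(14)
    for b in range(14)
    for c in range(14)
    for d in range(14)
    if a * c + b * d == 6 and a * d + b * c == 7
]
# Prints (0,1,7,6) and (1,0,6,7) from the exercise case, plus the
# video's (6,7,1,0) and (7,6,0,1).
print(solutions)
```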
189389
https://artofproblemsolving.com/wiki/index.php/2020_AIME_I_Problems/Problem_4?srsltid=AfmBOorfkHdvLsAN3m_nRfenXD9_9_xrFEXEVeA_vkOyvvv6P1h4lNhu
2020 AIME I Problems/Problem 4

Contents
1 Problem
2 Solution 1
3 Solution 2 (Official MAA)
4 Solution 3
5 Video Solutions
6 See Also

Problem
Let S be the set of positive integers N with the property that the last four digits of N are 2020, and when the last four digits are removed, the result is a divisor of N. For example, 42020 is in S because 4 is a divisor of 42020. Find the sum of all the digits of all the numbers in S. For example, the number 42020 contributes 4 + 2 + 0 + 2 + 0 = 8 to this total.

Solution 1
We note that any number in S can be expressed as 10000a + 2020 for some integer a. The problem requires that a divides this number, and since we know a divides 10000a, we need that a divides 2020. Each number contributes the sum of the digits of a, as well as 2 + 0 + 2 + 0 = 4. Since 2020 can be prime factorized as 2^2 · 5 · 101, it has (2+1)(1+1)(1+1) = 12 factors. So if we sum all the digits of all possible a values, and add 4 · 12 = 48, we obtain the answer.

Now we list out all factors of 2020, or all possible values of a: 1, 2, 4, 5, 10, 20, 101, 202, 404, 505, 1010, 2020. If we add up these digits, we get 45, for a final answer of 45 + 48 = 93.

-molocyxu

Solution 2 (Official MAA)
Suppose that N has the required property. Then there are positive integers a and b such that N = 10000a + 2020 = ab. Thus 2020 = a(b − 10000), which holds exactly when a is a positive divisor of 2020. The number 2020 has 12 divisors: 1, 2, 4, 5, 10, 20, 101, 202, 404, 505, 1010, and 2020. The requested sum is therefore the sum of the digits in these divisors plus 12 times the sum of the digits in 2020, which is 45 + 12 · 4 = 93.

Solution 3
Note that every N in S can be written as 10000a + 2020 for some positive integer a. Because N must be divisible by a, N/a = 10000 + 2020/a must be an integer, so 2020/a must be an integer and a must be a divisor of 2020 = 2^2 · 5 · 101. As in the solutions above, the 12 divisors of 2020 have digit sums totaling 45. We know that all N end in 2020, so the sum of the digits of each N is the sum of the digits of each a plus 4. Hence the sum of all of the digits of the numbers in S is 45 + 12 · 4 = 93.

Video Solutions

See Also
2020 AIME I (Problems • Answer Key • Resources)
Preceded by Problem 3 • Followed by Problem 5
1 • 2 • 3 • 4 • 5 • 6 • 7 • 8 • 9 • 10 • 11 • 12 • 13 • 14 • 15
All AIME Problems and Solutions

These problems are copyrighted © by the Mathematical Association of America, as part of the American Mathematics Competitions.
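As a quick computational check of the solutions above (my addition, not part of the wiki page), one can enumerate the divisors of 2020 and sum the digits directly:

```python
# Check (not from the wiki page): numbers in S have the form
# 10000*a + 2020 with a a divisor of 2020; sum all their digits.

def digit_sum(n: int) -> int:
    return sum(int(ch) for ch in str(n))

divisors = [a for a in range(1, 2021) if 2020 % a == 0]
total = sum(digit_sum(10000 * a + 2020) for a in divisors)
print(len(divisors), total)  # 12 divisors, total 93
```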
189390
https://arxiv.org/abs/1910.13994
Mathematics > Number Theory
arXiv:1910.13994 (math) [Submitted on 30 Oct 2019]

Title: On Newman and Littlewood polynomials with prescribed number of zeros inside the unit disk
Authors: Kevin G. Hare, Jonas Jankauskas

Abstract: We study $\{0, 1\}$ and $\{-1, 1\}$ polynomials $f(z)$, called Newman and Littlewood polynomials, that have a prescribed number $N(f)$ of zeros in the open unit disk $\mathcal{D} = \{z \in \mathbb{C}: |z| < 1\}$. For every pair $(k, n) \in \mathbb{N}^2$, where $n \geq 7$ and $k \in [3, n-3]$, we prove that it is possible to find a $\{0, 1\}$-polynomial $f(z)$ of degree $\deg f = n$ with non-zero constant term $f(0) \ne 0$, such that $N(f)=k$ and $f(z) \ne 0$ on the unit circle $\partial\mathcal{D}$. On the way to this goal, we answer a question of D. W. Boyd from 1986 on the smallest degree Newman polynomial that satisfies $|f(z)| > 2$ on the unit circle $\partial \mathcal{D}$. This polynomial is of degree $38$ and we use this special polynomial in our constructions. We also identify (without a proof) all exceptional $(k, n)$ with $k \in \{1, 2, 3, n-3, n-2, n-1\}$, for which no such $\{0, 1\}$-polynomial of degree $n$ exists: such pairs are related to regular (real and complex) Pisot numbers. Similar, but less complete results for $\{-1, 1\}$ polynomials are established. We also look at the products of spaced Newman polynomials and consider the rotated large Littlewood polynomials. Lastly, based on our data, we formulate a natural conjecture about the statistical distribution of $N(f)$ in the set of Newman and Littlewood polynomials.

| | |
| --- | --- |
| Comments: | Submitted for publication, v1.0, 9 pages, 7 figures, 8 tables |
| Subjects: | Number Theory (math.NT) |
| MSC classes: | 11R06, 11R09, 11B83, 11Y99, 12D10, 26C10, 30C15, 65H04, 93A99 |
| Cite as: | arXiv:1910.13994 [math.NT] (or arXiv:1910.13994v1 [math.NT] for this version) |

Submission history
From: Jonas Jankauskas [view email]
[v1] Wed, 30 Oct 2019 17:24:54 UTC (203 KB)
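For readers who want to experiment with the quantity $N(f)$ from the abstract, here is an illustrative Python sketch (mine, not from the paper) that counts the zeros of a given Newman polynomial inside the open unit disk numerically; the example polynomial is an arbitrary choice.

```python
# Illustrative sketch (not from the paper): count N(f), the number of
# zeros of a {0,1} (Newman) polynomial inside the open unit disk,
# using numpy's companion-matrix root finder. This is a numerical
# approximation; roots very close to |z| = 1 may be misclassified.

import numpy as np

def zeros_in_unit_disk(coeffs) -> int:
    # coeffs are ordered from the constant term upward;
    # np.roots expects the highest-degree coefficient first.
    roots = np.roots(list(reversed(coeffs)))
    return int(np.sum(np.abs(roots) < 1))

# f(z) = 1 + z + z^4, a Newman polynomial with f(0) != 0
print(zeros_in_unit_disk([1, 1, 0, 0, 1]))
```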
189391
https://www.quora.com/How-can-I-find-angle-of-a-triangle-in-which-the-median-and-latitude-drawn-from-the-same-vertex-divide-the-angle-at-that-vertex-into-three-equal-parts
Something went wrong. Wait a moment and try again. Mathematical Problems Median (triangles) Construction of Triangles Mathematics Word Problems Angle Bisector Theorem Geometric Mathematics 5 How can I find angle of a triangle in which the median and latitude drawn from the same vertex divide the angle at that vertex into three equal parts? Carlos Eŭ Th Triple IMO bronze medalist · Author has 5.9K answers and 4.4M answer views · 8y I don’t know what latitude of a triangle is, but I will take it to mean the height. Let ABC be a triangle, AH the height from A to BC, and AM the median (M the mid point between B and C. Without loss of generality, let’s say that H is between B and M. so we have △BAH and △MAH congruent, as angles BHA and MHA are equal (90∘), angles BAH and MAH are equal (per hypothesis of the problem) and they share side AH. Therefore BH=HM (or HM=12BM) So, let’s see triangle AHC. AM is a bisector, therefore we have the ratios MHMC=AHAC. But MH=12BM=12MC, I don’t know what latitude of a triangle is, but I will take it to mean the height. Let ABC be a triangle, AH the height from A to BC, and AM the median (M the mid point between B and C. Without loss of generality, let’s say that H is between B and M. so we have △BAH and △MAH congruent, as angles BHA and MHA are equal (90∘), angles BAH and MAH are equal (per hypothesis of the problem) and they share side AH. Therefore BH=HM (or HM=12BM) So, let’s see triangle AHC. AM is a bisector, therefore we have the ratios MHMC=AHAC. But MH=12BM=12MC, therefore MHMC=12. So AHC is a right triangle with right angle at H, and AC=2AH. So sinC=12andC=30∘. ∠HAC=60∘, and ∠HAB=∠HAM=12∠HAC=30∘, so ∠BAC=90∘. So the angle in the question is a right angle, and the other two angles of the triangle are 30° and 60°. Related questions How can I calculate the inner angles of a triangle where the median and the altitude from the same vertex divide the angle into three equal angles? The median of an isosceles triangle makes two equal angles at the vertex it is drawn from, but how do you find the angles it makes in general? If the altitude, angle bisector and median that are drawn from vertex A of the triangle ABC divides angle A into 4 equal angles, then what is the angle of A? How do I find the vertex angle of a star? Where does the vertex of an angle lie? Dean Rubine Been doing high school math since high school, circa 1975 · Author has 10.6K answers and 23.7M answer views · 8y I borrowed Mr. Pinzón’s figure; thanks. I didn’t otherwise peek at the answers. Since AH is an altitude (assuming latitude is a typo) we know BHA and AHM are right angles. So we have angle, side, angle and thus congruent triangles ABH and AMH. We know BM=CM by the definition of median. So we have 4BH=4HM= 2BM=2CM=BC. Let’s say BC=4 without loss of generality; we’re only after the angles. Let’s call c=AB=AM,a=BC,b=AC, h=AH, BH=HM=1, BM=CM=2,BC=4 Angles HAM, and CAM are given as equal. From the angle bisector theorem, we get HMAH=CMAC1h=2b\qu I borrowed Mr. Pinzón’s figure; thanks. I didn’t otherwise peek at the answers. Since AH is an altitude (assuming latitude is a typo) we know BHA and AHM are right angles. So we have angle, side, angle and thus congruent triangles ABH and AMH. We know BM=CM by the definition of median. So we have 4BH=4HM= 2BM=2CM=BC. Let’s say BC=4 without loss of generality; we’re only after the angles. Let’s call c=AB=AM,a=BC,b=AC, h=AH, BH=HM=1, BM=CM=2,BC=4 Angles HAM, and CAM are given as equal. 
From the angle bisector theorem, we get HM/AH = CM/AC, i.e., 1/h = 2/b, or b = 2h. AHC is a right triangle, so 3² + h² = b² = 4h², giving h² = 3. We have AH = √3 and BH = 1 with a right angle at H, so AB = √(3 + 1) = 2. So ABH is a 30-60-90 triangle and ∠BAH = 30°. Then ∠BAC = 3∠BAH = 90°, a right angle.

Doug Dillon, Ph.D. Mathematics · 8y

The solution exploits the fact that the median is also an angle bisector.

Gopal Menon, B Sc (Hons) in Mathematics, Indira Gandhi National Open University (IGNOU) · 8y

Let △ABC be the concerned triangle. Let A be the vertex from which the median and altitude (I presume that the term “latitude” is meant for the altitude) are drawn. Let D and E be points on side BC such that AD is the median and AE is the altitude. It is given that ∠BAE = ∠EAD = ∠DAC; let this angle be θ. In △ABD, AE is perpendicular to BD and ∠BAE = ∠EAD, so it is easy to show that △ABE and △ADE are congruent.
⇒ AB = AD, and BE = ED = DC/2 = BC/4.
In △ABE, tan θ = BE/AE = BC/(4·AE).
In △ACE, tan 2θ = EC/AE = 3BC/(4·AE).
⇒ tan 2θ = 3 tan θ
⇒ 2 tan θ/(1 − tan²θ) = 3 tan θ
⇒ tan²θ = 1/3
⇒ tan θ = 1/√3
⇒ θ = 30°
⇒ ∠BAC = 90°, and since ∠ABE = 90° − θ and ∠ACE = 90° − 2θ, ∠ABC = 60° and ∠BCA = 30°.

Assistant Bot · 1y

To solve this problem, we need to find the angle of a triangle at a vertex where the median and the altitude from that vertex divide the angle into three equal parts. Let’s denote the triangle as ABC with vertex A. Given: let ∠A = 3θ. The median AD connects vertex A to the midpoint D of side BC, and the altitude AE is drawn from vertex A to side BC.
Condition: the altitude AE and the median AD divide ∠A into three equal angles, with the altitude lying between AB and the median: ∠BAE = ∠EAD = ∠DAC = θ. Finding θ: since AE ⊥ BC and ∠BAE = ∠EAD, triangles ABE and ADE are congruent, so BE = ED = BD/2 = BC/4 and EC = 3BC/4. In the right triangles ABE and ACE, tan θ = BE/AE and tan 2θ = EC/AE = 3 tan θ. Expanding tan 2θ = 2 tan θ/(1 − tan²θ) gives tan²θ = 1/3, so θ = 30°. Conclusion: the angle ∠A of triangle ABC is 3θ = 90°.

Doug Dillon, Ph.D. Mathematics · 8y · Related: If the altitude, angle bisector and median that are drawn from vertex A of the triangle ABC divide angle A into 4 equal angles, then what is the angle of A? [Answer given as a figure.]

John Steele · 7y · Related: How can I calculate the inner angles of a triangle where the median and the altitude from the same vertex divide the angle into three equal angles?

A picture would be worth 1000 words if I knew how to attach one. Edit: see the handy drawing provided in comments by Martyn Hathaway (many thanks!). It will make the description easier to follow. Draw a horizontal line AC of any convenient length, with E its midpoint. Place vertex B somewhere above the line, such that the perpendicular falls between A and E (hint: make it exactly midway; we’ll prove it in a moment). Draw BD and BE; we are given that these lines trisect angle B. Given that B is trisected and the altitude is perpendicular to AC, triangle ABE is isosceles, divided into two right triangles. AD = ½AE = ¼AC. Since the sum of the angles of a triangle is 180°, we can conclude ∠A = ∠BEA = 90° − B/3. Angle BEC must be supplementary, leading to angle C = 90° − 2B/3. Now we need B. The altitude is common to triangles ABD and DBC. If AC is unit length, D divides it into 0.25 and 0.75, and 0.25/h = tan(B/3) and 0.75/h = tan(2B/3). I was going to set up Newton’s method to solve, but Lucky_Guess 101 gave me B/3 = 30°, or B = 90°, and A = 60°, C = 30°. Given that it turns out to be a 30°-60°-90° triangle, there is probably some simple, elegant method I have overlooked. Reverse-engineering it from the facts was a bit non-trivial.
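A quick numeric sanity check of the two tangent equations in Mr. Steele's answer (a minimal Python sketch; the variable names are mine, and B = 90° is the value being verified rather than derived):

import math

# Setup from the answer above: AC = 1, altitude foot D at 0.25, midpoint E at 0.5.
# The trisection forces 0.25/h = tan(B/3) and 0.75/h = tan(2B/3),
# which combine to tan(2B/3) = 3*tan(B/3). Check that B = 90 degrees works.
B = math.radians(90)
h = 0.25 / math.tan(B / 3)      # altitude implied by the first equation
lhs = 0.75 / h                  # left side of the second equation
rhs = math.tan(2 * B / 3)
print(h, lhs, rhs)              # ~0.4330, ~1.7321, ~1.7321 -> consistent

Both sides come out to √3, so B = 90° satisfies the system, matching the guess credited to Lucky_Guess 101.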
Philip Lloyd, Specialist Calculus Teacher, Motivator and Baroque Trumpet Soloist · 6y · Related: Does the median of a triangle bisect the vertex angle?

If you have questions like this, you should simply investigate! Draw a few triangles and find out for yourself. Here is what I would do: the median from A to the midpoint of BC is the line AM. Now you KNOW whether the median bisects angle A or not, don’t you?

Now you should be wondering, “Does the median ever bisect the vertex angle?” What sort of triangles are these two? Can you decide for yourself now?

Jaya Sharma, Teacher, content editor at Enchanter (2014–present) · 8y · Related: If the altitude, angle bisector and median that are drawn from vertex A of the triangle ABC divide angle A into 4 equal angles, then what is the angle of A?

Let ∠CAH = ∠HAK = ∠KAM = ∠MAB = x°. Since ∠AHC = 90°, in triangle ACH, ∠C = (90 − x)°. In triangle AHM, the exterior angle ∠AHC = 90° = ∠HAM + ∠AMH = 2x + ∠AMH; therefore ∠AMH = (90 − 2x)° (1). From M, draw MV parallel to AB. ∠AMB and ∠AMC being a linear pair, using (1) we get ∠AMB = 180° − ∠AMC = 180° − ∠AMH = (180 − (90 − 2x))° = (90 + 2x)°. Then ∠AMV = ∠AMB − ∠VMB = ((90 + 2x) − (90 − x))° = 3x° (alternate angles), and ∠BMV = ∠AMB − ∠AMV = ((90 + 2x) − 3x)° = (90 − x)°. In triangle AMV, ∠AVM + ∠AMV + ∠MAV = 180°, so ∠AVM + 3x + x = 180°.

Doug Dillon, Ph.D. Mathematics · 7y · Related: How can I calculate the inner angles of a triangle where the median and the altitude from the same vertex divide the angle into three equal angles?

Locate, on BC, point F so that DF is perpendicular to BC. The triangles BED and BFD are congruent, and DF is half of DC. Therefore sin C = DF/DC = 1/2 and C is 30°, giving 2x = 60° and so x = 30°.
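The step that every trisection answer above turns on can be packaged as a single worked identity; using Gopal Menon's labels (E the foot of the altitude, D the midpoint of BC, θ one third of ∠A, with EC = 3·BE):

\[
\tan 2\theta = \frac{EC}{AE} = \frac{3\,BE}{AE} = 3\tan\theta
\;\Longrightarrow\;
\frac{2\tan\theta}{1-\tan^{2}\theta} = 3\tan\theta
\;\Longrightarrow\;
\tan^{2}\theta = \tfrac{1}{3}
\;\Longrightarrow\;
\theta = 30^{\circ},\qquad \angle A = 3\theta = 90^{\circ}.
\]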
Ajesh K C, studied at Mar Athanasius College of Engineering, Kothamangalam · 5y · Related: How do you find the angles of a triangle given 3 sides?

Use the Cosine and Sine Rules.

Daniel Ettedgui, DO, fell in love with geometry at 9 years of age · 1y · Related: In triangle ABC, angle C = 70 degrees, angle A = 45 degrees, AB = 40 m. What is the length of the median drawn from vertex A to side BC?

Two steps. Use the Law of Sines to calculate the length of BC: sin 70°/40 = sin 45°/BC. Now use the Law of Cosines to calculate the length of the median AD, where D is the midpoint of BC (so BD = BC/2): AD² = BD² + 1600 − 2·BD·40·cos 65°. That’s it.

Pradeep Hebbar, many years of structural engineering and math enthusiasm · 1y · Related: In triangle ABC, angle C = 70 degrees, angle A = 45 degrees, AB = 40 m. What is the length of the median drawn from vertex A to side BC?

In the given △ABC, A = 45°, C = 70°, and AB = 40, so ∠B = 180° − (45° + 70°) = 65°. Let R be the circumradius. By the extended sine rule, AB/sin C = BC/sin A = AC/sin B = 2R. From the first ratio, 40/sin 70° = 2R ⟹ R = 20/sin 70°. Further, BC/sin 45° = 2R ⟹ BC = 2R sin 45°, and AC/sin 65° = 2R ⟹ AC = 2R sin 65°. From Apollonius’s theorem, AB² + AC² = 2(AM² + (BC/2)²), that is, 40² + (2R sin 65°)² = 2(AM² + (2R sin 45°/2)²). By substituting R, AM ≈ 36.29974. Ans: ≈36.29974 m.
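Pradeep Hebbar's arithmetic is easy to confirm numerically; here is a minimal Python sketch of the same route (the variable names are mine):

import math

# Given: A = 45 deg, C = 70 deg, AB = 40 (so B = 65 deg).
A, C = math.radians(45), math.radians(70)
B = math.pi - A - C
R = 40 / (2 * math.sin(C))          # extended sine rule: AB / sin C = 2R
BC = 2 * R * math.sin(A)
AC = 2 * R * math.sin(B)
# Apollonius's theorem: AB^2 + AC^2 = 2*(AM^2 + (BC/2)^2)
AM = math.sqrt((40**2 + AC**2) / 2 - (BC / 2)**2)
print(AM)                           # ~36.29974, matching the answer above

The Law of Sines plus Law of Cosines route in the previous answer gives the same value.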
189392
https://www.ncbi.nlm.nih.gov/books/NBK535447/
NCBI Bookshelf. A service of the National Library of Medicine, National Institutes of Health.
StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2025 Jan-.

Naegleria Infection and Primary Amebic Meningoencephalitis

Najwa Pervin (Southern Illinois University); Vidya Sundareshan (SIU School of Medicine). Last Update: February 16, 2025.

Continuing Education Activity

Naegleria fowleri, commonly known as the "brain-eating amoeba," is a free-living, thermophilic protozoan that belongs to the phylum Percolozoa. This deadly pathogen causes primary amebic meningoencephalitis (PAM), a rare yet rapidly fatal infection of the central nervous system. N fowleri thrives in warm freshwater environments, such as lakes, rivers, and poorly chlorinated pools, and usually enters the body through the nasal passages during water-related activities. Infection occurs when contaminated water enters the nasal passages, allowing the amoeba to migrate to the brain. This causes severe inflammation and necrosis, leading to a nearly universal mortality rate if not diagnosed and treated promptly. Diagnosis is challenging due to nonspecific symptoms resembling viral or bacterial infections. Prompt recognition, accurate diagnosis, and aggressive combination therapy are crucial for improving outcomes, although survival remains rare. This activity provides an in-depth review of N fowleri, covering its epidemiology, pathophysiology, clinical presentation, diagnostic challenges, and current treatment strategies. This activity also emphasizes the importance of collaboration and the critical role of the interprofessional healthcare team in promptly identifying, diagnosing, and managing PAM to initiate appropriate interventions and improve patient outcomes.

Objectives:
Identify the early symptoms of primary amebic meningoencephalitis caused by Naegleria fowleri, including fever, headache, and nausea, to improve early recognition.
Differentiate Naegleria fowleri infections from bacterial, viral, and other protozoal central nervous system infections based on cerebrospinal fluid analysis, imaging, and clinical history.
Apply current treatment guidelines for managing primary amebic meningoencephalitis, including antimicrobial therapy and supportive care, to initiate prompt treatment.
Collaborate with the interprofessional healthcare team, including interventional radiologists, infectious disease specialists, neurologists, and intensivists, to ensure comprehensive care for patients with PAM.

Introduction

Naegleria fowleri is a free-living, eukaryotic amoeba belonging to the genus Naegleria and is commonly known as the "brain-eating amoeba."
N fowleri is a facultative parasite, meaning it does not require a host to complete its life cycle, and it reproduces by mitosis. The 4 free-living amoebae (N fowleri, Balamuthia mandrillaris, Sappinia spp, and Acanthamoeba spp) are responsible for human amebic meningoencephalitis, a group of severe and often fatal infections of the central nervous system (CNS). Of the 47 species of Naegleria, only N fowleri causes primary amebic meningoencephalitis (PAM), which is a rare but rapidly progressing and almost always fatal CNS infection, leading to the death of patients within 3 to 7 days. N fowleri was named after Malcolm Fowler, who first documented a case of PAM in Australia. N fowleri is a thermophilic amoeba that thrives in temperatures up to 45 °C and is commonly found in warm freshwater environments, including soil, lakes, and rivers. This amoeba can also be present in inadequately chlorinated water, such as swimming pools and tap water. N fowleri causes PAM when contaminated freshwater enters the nasal passages, usually during activities such as swimming in freshwater bodies, using poorly chlorinated pools, or performing nasal rinses with nonsterile water. After entering the nose, the amoeba migrates along the olfactory nerves, crosses the cribriform plate, and invades the CNS, resulting in PAM. The infection is characterized by cerebral edema, hemorrhagic necrosis, brain herniation, and, in most cases, death. PAM is challenging to diagnose clinically, as its initial symptoms are nonspecific and can resemble flu-like illness, bacterial meningitis, or viral meningoencephalitis. A high index of suspicion is essential, and clinicians must be aware of the epidemiology and potential exposures to avoid missed diagnoses. PAM progresses rapidly and is almost always fatal. Given the severity, high fatality rate, and limited treatment options of PAM, obtaining a thorough epidemiological history and maintaining clinician awareness are crucial for prompt diagnosis and timely administration of available therapies. Early recognition and intervention offer the best chance of improving patient outcomes, although survival remains rare.

Etiology

N fowleri is the causative agent of PAM, a rare but rapidly progressive and usually fatal infection of the CNS. Although Acanthamoeba spp and B mandrillaris are opportunistic pathogens that cause granulomatous amebic encephalitis in immunocompetent and immunocompromised individuals, N fowleri primarily affects healthy, immunocompetent children and young adults who have been exposed to contaminated freshwater. Widely distributed worldwide, N fowleri thrives in warm freshwater environments as a thermophilic ameboflagellate, growing at temperatures up to 45 °C. This amoeba is commonly found in soil, lakes, and rivers and can also persist in inadequately chlorinated water sources, such as swimming pools, tap water, and industrial wastewater. The life cycle of N fowleri consists of 3 stages: trophozoites, flagellates, and cysts. The trophozoite stage is the reproductive and invasive form responsible for human infection. As a thermophilic organism, it is most active during the warm summer months, when human exposure is more likely. Trophozoites are long and slender, approximately 22 µm long, equipped with pseudopodia for movement and bacterial ingestion, and capable of forming large colonies. Under poor nutritive conditions, trophozoites transition into flagellates by developing flagella.
If unfavorable conditions persist, they further transform into metabolically inactive cysts, measuring 7 to 12 μm. These cysts are highly resistant, capable of withstanding low temperatures, and can survive during winter. Trophozoites feed on bacteria and fungi in warm waters and can encyst and settle into sediments when water temperatures drop during winter. Of the 47 known species of Naegleria, only N fowleri causes PAM, a rare but aggressive infection that typically leads to death within 3 to 7 days. Infection occurs when contaminated water enters the nasal passages, usually through freshwater exposure or nasal rinsing with nonsterile water. Once inside, N fowleri migrates to the CNS via the olfactory nerves and cribriform plate, leading to severe and nearly always fatal infection. Notably, ingestion of N fowleri does not cause infection.

Epidemiology

N fowleri is ubiquitous and has been found worldwide in soil and freshwater environments, including lakes, hot water springs, poorly chlorinated pools, and thermally polluted water bodies. Although cases of N fowleri PAM have been reported on all continents, the true incidence is likely underdiagnosed or underreported. The disease has an incubation period of 1 to 14 days and progresses rapidly, presenting with nonspecific signs and symptoms. A high index of suspicion is essential for diagnosis, as mortality reaches 98%, with a median time from symptom onset to death of 5 days. In many cases, death occurs before a diagnosis is made, and the disease is confirmed postmortem if an autopsy is performed. Diagnosing PAM requires specialized laboratory tests on cerebrospinal fluid (CSF) or brain tissue, which must be conducted promptly. However, many hospitals lack the necessary diagnostic resources. Establishing surveillance systems and classifying PAM as a notifiable disease would improve data collection and facilitate early recognition. Globally, N fowleri infections are more prevalent in healthy, immunocompetent young males during the summer, when water temperatures are high and people engage in outdoor water activities. A recent study analyzing global epidemiological data from published and unpublished reports and the Centers for Disease Control and Prevention (CDC) Free-Living Ameba Surveillance System (1962–2018) identified 381 cases of PAM caused by N fowleri: 157 from the United States and 223 from 32 other countries. Of these, 182 cases were confirmed, 89 were probable, and 110 were suspect. The highest numbers of cases worldwide occurred in the United States, Pakistan, and Mexico. In the United States, N fowleri has been responsible for 157 known cases from 1962 to 2022, with only 4 survivors. Annually, 0 to 8 laboratory-confirmed cases are reported, and PAM accounts for fewer than 0.5% of diagnosed encephalitis-related deaths. Historically, most N fowleri infections have been reported in southern states, particularly Florida, Texas, and southern California. However, recent cases have been detected in northern states, such as Maryland, Indiana, northern California, Connecticut, and Minnesota, potentially due to rising temperatures and increased waterborne exposure. Most N fowleri infections occur through recreational freshwater exposure, such as swimming or diving. However, rare cases have been linked to household water use. In Arizona, 2 children contracted PAM while bathing at home, and N fowleri was traced to an untreated community well-water system.
Pathophysiology

N fowleri is a thermophilic amoeba that thrives in temperatures up to 45 °C and is widely distributed in warm freshwater environments. This amoeba is commonly found in soil, lakes, and rivers but can also persist in inadequately chlorinated water sources, such as swimming pools and tap water. N fowleri infection occurs when contaminated freshwater enters the nasal passages, typically during swimming in freshwater bodies, using poorly chlorinated pools, or performing nasal rinses with nonsterile water. After entering through the nose, N fowleri migrates via the olfactory nerves, crosses the cribriform plate, and invades the CNS, thereby causing PAM. This infection is characterized by cerebral edema, hemorrhagic necrosis, herniation, and, in most cases, death. The most severely affected regions of the brain include the olfactory bulbs, the basilar portion of the frontal cerebrum, and the cerebellum. N fowleri triggers a strong innate immune response, and its virulence is influenced by multiple factors, including the protein Nfa1, nitric oxide production, and pore-forming proteins. The Nfa1 protein facilitates amebic attachment to target cells, while specialized feeding structures enable the amoeba to ingest bacteria and fungi in the environment and directly phagocytose brain cells. The organism further contributes to tissue destruction by secreting cytolytic molecules, including cysteine proteases, phospholipases, and phospholipolytic enzymes, which mediate extensive necrosis. This intense immune response and the aggressive virulence of N fowleri result in significant destruction of brain parenchymal tissue, leading to the rapid progression of PAM.

History and Physical

A detailed patient history and thorough physical examination are crucial for diagnosing PAM. The disease is challenging to identify clinically, as its early symptoms are often nonspecific and can resemble flu-like illnesses, bacterial meningitis, or viral meningoencephalitis. Clinicians must maintain a high index of suspicion and be aware of epidemiological risk factors and potential exposures, as failure to recognize PAM can result in delayed diagnosis and rapid fatality. Given its severity, high fatality rate, and limited treatment options, obtaining an accurate epidemiological history is crucial. Clinician awareness is necessary for prompt diagnosis and the immediate initiation of available therapies. The incubation period for PAM ranges from 1 to 14 days. Early symptoms, such as fever, headache, lethargy, nausea, and vomiting, may resemble bacterial meningitis. As the disease progresses, more severe manifestations develop, including confusion, neck stiffness, photophobia, seizures, and cranial nerve abnormalities. In most cases, PAM progresses rapidly, leading to coma and death within a matter of days.

Evaluation

Diagnosing PAM is challenging, as patients do not always present with the classic symptoms of meningitis. Early symptoms are often vague and nonspecific, resembling mild viral or bacterial infections. Patients may initially experience fever and headache, which can progress to meningismus and confusion, depending on the stage of the disease at the time of presentation. Given its rapid progression and high fatality rate, PAM should be suspected in any patient with meningoencephalitis or meningitis who has a recent history of freshwater exposure. Laboratory findings in PAM can be nonspecific.
CSF analysis often mimics bacterial meningitis, showing low to normal glucose levels, elevated protein, and leukocytosis with a polymorphonuclear predominance. CSF pressures are typically elevated, with recorded values reaching up to 600 mm H2O in some cases. However, standard laboratory tests may not be sufficient for diagnosis. Rapid diagnosis is critical due to the aggressive nature of PAM. Few laboratories in the United States and worldwide can test for N fowleri. Even when specimens are sent to reference laboratories, the disease often progresses too quickly for the results to be clinically useful. If CSF is obtained, several diagnostic tests can help identify N fowleri. A quick method involves visualizing trophozoites directly using a wet mount. Staining techniques, including Giemsa, Wright, hematoxylin and eosin (H&E), and periodic acid–Schiff (PAS), can also be utilized. Polymerase chain reaction (PCR) testing can detect N fowleri DNA directly from CSF and is now considered the gold standard for PAM diagnosis. However, PCR limitations include limited availability and the necessity of clinical suspicion. Gram staining and standard bacterial cultures are unreliable, as the fixation process destroys the amoebae. Antigen detection via immunohistochemical staining can also be performed on CSF samples. Additional diagnostic techniques for tissue samples include immunohistochemical staining, indirect immunofluorescence, PCR, and culture. Serological testing is generally not useful, as PAM progresses too rapidly for results to be available before the patient’s death. Magnetic resonance imaging (MRI) may provide some insight, although findings are often nonspecific in the early stages. MRI may show diffuse hemorrhages, infarctions, or contrast enhancement in affected brain regions as the infection advances. A brain biopsy, if feasible, can provide definitive evidence of PAM. Histopathological examination typically reveals hemorrhagic meningitis, with the base of the brain and olfactory bulbs being the most severely affected areas. Numerous trophozoites are often observed within the necrotic meninges. Metagenomic next-generation sequencing has recently gained attention as a promising diagnostic tool for free-living amoebae, including N fowleri. This shotgun sequencing approach enables broad pathogen detection from CSF, tissue, and serum samples without requiring large specimen volumes. Microbial cell-free DNA analysis has also been explored for detecting free-living amoebae. A 2023 report identified cases of Acanthamoeba spp, B mandrillaris, and N fowleri between 2018 and 2021 using this technique. These emerging sequencing technologies have the potential to reduce diagnostic delays and improve early detection, ultimately enhancing patient outcomes. Treatment / Management Treatment options for PAM remain limited due to the lack of controlled trials or clinical studies. Current therapeutic approaches are based on in-vitro studies and case reports. The recommended treatment regimen typically involves a combination of 5 or 6 antimicrobial or antifungal agents that either demonstrate in-vitro activity against N fowleri or have been associated with survival in patients. Miltefosine is the most recent addition to this regimen, which commonly includes amphotericin B, azithromycin, fluconazole, rifampin, and dexamethasone. However, the efficacy of these drugs remains inconsistent, and their use is often linked to significant toxicity. 
Amphotericin B is the primary drug of choice for PAM, exhibiting amebicidal activity at low concentrations. This drug is recommended to be used in combination with at least one other amebicidal agent. The CDC advises the use of conventional amphotericin B over the liposomal formulation for both intrathecal and intravenous administration, as in-vitro studies indicate a higher minimum inhibitory concentration (MIC) for the liposomal formulation compared to conventional amphotericin B. Miltefosine, originally developed as an antineoplastic agent for breast cancer, has shown promise in treating N fowleri infections. Notably, 2 patients who received miltefosine in 2013 and another in 2016 survived, marking a significant milestone in PAM treatment. Although miltefosine was initially accessible only through the CDC, it is now commercially available. Azoles, such as fluconazole and voriconazole, penetrate the CNS effectively and are commonly used as adjunctive therapy with amphotericin B. Azithromycin has demonstrated in-vitro and in-vivo activity against N fowleri, with some studies suggesting a synergistic effect when combined with amphotericin B. Rifampin exhibits amebicidal activity against N fowleri in vitro and enhances the effectiveness of amphotericin B when administered intravenously. It is commonly included in standard PAM therapy; however, clinicians should be mindful of potential drug-drug interactions. Rifampin induces hepatic cytochrome P450 enzymes, which can affect the metabolism of azoles and amphotericin B. Controlled hypothermia has been used in recent cases of PAM, with 3 children surviving the infection in 2013 and 2016 after receiving it as part of their treatment. These patients achieved full neurological recovery and were able to return to school, suggesting that controlled hypothermia may have a role in improving outcomes.

Differential Diagnosis

Patients with PAM do not always exhibit the classic symptoms of meningitis, making diagnosis challenging. Early symptoms are often vague and nonspecific, resembling mild viral or bacterial infections. Patients may initially present with fever, headache, and general malaise, which can progress to meningismus and confusion depending on the disease's stage at presentation. PAM should be suspected in any patient presenting with meningoencephalitis or meningitis, particularly if there is a recent history of freshwater exposure. A thorough patient history is vital for identifying potential risk factors, and clinician awareness is essential for ensuring timely recognition and intervention.

Prognosis

PAM is a rare but rapidly progressing disease with a poor prognosis, even when treatment is administered. N fowleri infection is associated with a mortality rate exceeding 98%, with a median time from symptom onset to death of just 5 days. Of the 157 reported cases in the United States between 1962 and 2022, only 4 patients survived. Diagnosing PAM is challenging due to limited awareness among healthcare professionals. The disease is rarely considered in the differential diagnosis, as its early symptoms resemble viral or bacterial meningoencephalitis and flu-like syndromes. Moreover, laboratory resources to confirm the diagnosis are often unavailable. With no standardized, highly effective treatments, PAM remains nearly universally fatal.

Complications

N fowleri infection causes PAM, which is a rapidly progressive and often fatal disease.
The infection leads to severe cerebral edema, hemorrhagic necrosis, and increased intracranial pressure, which can result in brain herniation and death within days of symptom onset. Other complications include seizures, cranial nerve dysfunction, hydrocephalus, and coma. Due to its aggressive nature, PAM has a 98% mortality rate, with most patients succumbing despite treatment. Survivors may experience neurological deficits, although recovery is rare. Early diagnosis and prompt initiation of therapy are critical in preventing fatal outcomes and minimizing long-term complications.

Consultations

Managing PAM requires a multidisciplinary approach. Specialists to consult include infectious disease physicians, neurologists, neuroradiologists, interventional radiologists, pharmacists, and intensivists, especially if a cerebral biopsy is needed.

Deterrence and Patient Education

To prevent infection with N fowleri, individuals should avoid activities that introduce fresh water into the nasal passages, especially during summer. When using nonsterile tap water or swimming in freshwater bodies or poorly chlorinated pools, it is important to keep the head above water or use nose clips. For nasal irrigation, such as with a neti pot or during religious practices such as ablution, only sterile, distilled, filtered, or previously boiled water should be used. Additionally, ensuring proper disinfection of water bodies, including swimming pools and public water systems with chlorination, can help prevent infection.

Pearls and Other Issues

Key facts to keep in mind about N fowleri include:
- N fowleri is a thermophilic, free-living amoeba found in warm freshwater bodies, such as lakes, hot springs, poorly chlorinated pools, or tap water. This amoeba is the causative agent of PAM.
- N fowleri enters the body through the nasal passages during activities such as swimming, diving, or nasal rinsing with contaminated water. This amoeba then crosses the cribriform plate and travels along the olfactory nerve to invade the central nervous system, causing rapidly progressive necrotizing meningoencephalitis.
- Early symptoms of N fowleri infection mimic viral or bacterial meningitis, including fever, headache, nausea, and vomiting. These symptoms rapidly progress to confusion, seizures, photophobia, meningismus, altered mental status, and coma.
- The disease is fatal in over 98% of cases, with a median survival of just 5 days after symptom onset.
- PCR is the gold standard for definitive diagnosis. CSF findings resemble those of bacterial meningitis, showing elevated white blood cells, low to normal glucose, elevated protein, and increased opening pressure. A wet mount will show motile trophozoites.
- Treatment involves combination therapy, such as the administration of amphotericin B and miltefosine.

Enhancing Healthcare Team Outcomes

Accurate and timely diagnosis and management of PAM require coordinated effort and collaboration among an interprofessional healthcare team. Prompt consultation with an infectious disease specialist is crucial for effective treatment. Clinicians should also closely monitor patients and perform frequent assessments, as the disease progresses rapidly. Patients experiencing neurological deterioration and requiring mechanical ventilation should be managed in the intensive care unit (ICU) for optimal care. Additionally, collaboration with pharmacists is vital, as the medications used to treat PAM may have significant drug interactions that could reduce their therapeutic effectiveness.
Disclosure: Najwa Pervin declares no relevant financial relationships with ineligible companies.
Disclosure: Vidya Sundareshan declares no relevant financial relationships with ineligible companies.
Copyright © 2025, StatPearls Publishing LLC. This book is distributed under the terms of the Creative Commons Attribution-NonCommercial-NoDerivatives 4.0 International (CC BY-NC-ND 4.0) license, which permits others to distribute the work, provided that the article is not altered or used commercially. You are not required to obtain permission to distribute this article, provided that you credit the author and journal.
Bookshelf ID: NBK535447. PMID: 30571068.
Cite this page: Pervin N, Sundareshan V. Naegleria Infection and Primary Amebic Meningoencephalitis. [Updated 2025 Feb 16]. In: StatPearls [Internet]. Treasure Island (FL): StatPearls Publishing; 2025 Jan-.
189393
https://www.hrsonline.org/wp-content/uploads/2025/02/2019-HRS-Management-of-ACM.pdf
2019 HRS expert consensus statement on evaluation, risk stratification, and management of arrhythmogenic cardiomyopathy

1547-5271/$-see front matter © 2019 Heart Rhythm Society. All rights reserved.

Jeffrey A. Towbin, MS, MD (Chair),1,2 William J. McKenna, MD, DSc (Vice-Chair),3 Dominic J. Abrams, MD, MRCP, MBA,4 Michael J. Ackerman, MD, PhD,5,* Hugh Calkins, MD, FHRS, CCDS,6 Francisco C.C. Darrieux, MD, PhD,7,† James P. Daubert, MD, FHRS,8 Christian de Chillou, MD, PhD,9,‡ Eugene C. DePasquale, MD,10,§ Milind Y. Desai, MD,11,¶ N.A. Mark Estes III, MD, FHRS, CCDS,12 Wei Hua, MD, FHRS,13,# Julia H. Indik, MD, PhD, FHRS,14 Jodie Ingles, MPH, PhD, FHRS,15,** Cynthia A. James, ScM, PhD, CGC,6 Roy M. John, MBBS, PhD, CCDS, FHRS,16 Daniel P. Judge, MD,17,†† Roberto Keegan, MD,18,19,‡‡ Andrew D. Krahn, MD, FHRS,20 Mark S. Link, MD, FHRS,21,§§ Frank I. Marcus, MD,14 Christopher J. McLeod, MBChB, PhD, FHRS,5 Luisa Mestroni, MD,22 Silvia G. Priori, MD, PhD,23,24,25 Jeffrey E. Saffitz, MD, PhD,26 Shubhayan Sanatani, MD, FHRS, CCDS,27,¶¶ Wataru Shimizu, MD, PhD,28,## J. Peter van Tintelen, MD, PhD,29,30 Arthur A.M. Wilde, MD, PhD,24,29,31 Wojciech Zareba, MD, PhD32

Document Reviewers: Peter Aziz, MD; Mina K. Chung, MD, FHRS; Shriprasad Deshpande, MBBS, MS; Susan Etheridge, MD, FACC; Marcio Jansen de Oliveira Figueiredo, MD; John Gorcsan III, MD, FASE; Denise Tessariol Hachul, MD; Robert Hamilton, MD; Richard Hauer, MD; Minoru Horie, MD, PhD; Yuki Iwasaki, MD, PhD; Rajesh Janardhanan, MD, MRCP, FACC, FASE; Neal Lakdawala, MD; Andrew P. Landstrom, MD, PhD; Andrew Martin, MBChB, CCDS; Ana Morales, MS; Brittney Murray, MS; Santiago Nava Townsend, MD; Stuart Dean Russell, MD; Frederic Sacher, MD, PhD; Mauricio Scanavacca, MD; Kavita Sharma, MD; Yoshihide Takahashi, MD; Harikrishna Tandri, MBBS, MD; Gaurav A. Upadhyay, MD, FACC; Christian Wolpert, MD

From the 1Le Bonheur Children’s Hospital, Memphis, Tennessee, 2University of Tennessee Health Science Center, Memphis, Tennessee, 3University College London, Institute of Cardiovascular Science, London, United Kingdom, 4Boston Children’s Hospital, Boston, Massachusetts, 5Mayo Clinic, Rochester, Minnesota, 6Johns Hopkins University, Baltimore, Maryland, 7Universidade de São Paulo, Instituto do Coração HCFMUSP, São Paulo, Brazil, 8Duke University Medical Center, Durham, North Carolina, 9Nancy University Hospital, Vandoeuvre-lès-Nancy, France, 10University of California Los Angeles, Los Angeles, California, 11Cleveland Clinic, Cleveland, Ohio, 12University of Pittsburgh Medical Center, Pittsburgh, Pennsylvania, 13Fu Wai Hospital, Beijing, China, 14University of Arizona, Sarver Heart Center, Tucson, Arizona, 15Agnes Ginges Centre for Molecular Cardiology at Centenary Institute, The University of Sydney, Sydney, Australia, 16Vanderbilt University Medical Center, Nashville, Tennessee, 17Medical University of South Carolina, Charleston, South Carolina, 18Hospital Privado Del Sur, Buenos Aires, Argentina, 19Hospital Español, Bahia Blanca, Argentina, 20The University of British Columbia, Vancouver, Canada, 21UT Southwestern Medical Center, Dallas, Texas, 22University of Colorado Anschutz Medical Campus, Aurora, Colorado, 23University of Pavia, Pavia, Italy, 24European Reference Network for Rare and Low Prevalence Complex Diseases of the Heart (ERN GUARD-Heart), 25ICS Maugeri, IRCCS, Pavia, Italy, 26Beth Israel Deaconess Medical Center, Boston, Massachusetts, 27Children’s Heart Center, Vancouver, Canada, 28Department of
Cardiovascular Medicine, Nippon Medical School, Tokyo, Japan, 29University of Amsterdam, Academic Medical Center, Amsterdam, the Netherlands, 30Utrecht University Medical Center Utrecht, University of Utrecht, Department of Genetics, Utrecht, the Netherlands, 31Department of Medicine, Columbia University Irving Medical Center, New York, New York, and 32University of Rochester Medical Center, Rochester, New York.

*Representative of the American College of Cardiology (ACC)
†Representative of the Sociedade Brasileira de Arritmias Cardíacas (SOBRAC)
‡Representative of the European Heart Rhythm Association (EHRA)
§Representative of the International Society for Heart & Lung Transplantation (ISHLT)
¶Representative of the American Society of Echocardiography (ASE)
#Representative of the Asia Pacific Heart Rhythm Society (APHRS)
**Representative of the National Society of Genetic Counselors (NSGC)
††Representative of the Heart Failure Society of America (HFSA)
‡‡Representative of the Latin American Heart Rhythm Society (LAHRS)
§§Representative of the American Heart Association (AHA)
¶¶Representative of the Pediatric & Congenital Electrophysiology Society (PACES)
##Representative of the Japanese Heart Rhythm Society (JHRS)

Abstract

Arrhythmogenic cardiomyopathy (ACM) is an arrhythmogenic disorder of the myocardium not secondary to ischemic, hypertensive, or valvular heart disease. ACM incorporates a broad spectrum of genetic, systemic, infectious, and inflammatory disorders. This designation includes, but is not limited to, arrhythmogenic right/left ventricular cardiomyopathy, cardiac amyloidosis, sarcoidosis, Chagas disease, and left ventricular noncompaction. The ACM phenotype overlaps with other cardiomyopathies, particularly dilated cardiomyopathy with arrhythmia presentation that may be associated with ventricular dilatation and/or impaired systolic function. This expert consensus statement provides the clinician with guidance on evaluation and management of ACM and includes clinically relevant information on genetics and disease mechanisms. PICO questions were utilized to evaluate contemporary evidence and provide clinical guidance related to exercise in arrhythmogenic right ventricular cardiomyopathy. Recommendations were developed and approved by an expert writing group, after a systematic literature search with evidence tables, and discussion of their own clinical experience, to present the current knowledge in the field. Each recommendation is presented using the Class of Recommendation and Level of Evidence system formulated by the American College of Cardiology and the American Heart Association and is accompanied by references and explanatory text to provide essential context. The ongoing recognition of the genetic basis of ACM provides the opportunity to examine the diverse triggers and potential common pathway for the development of disease and arrhythmia.

KEYWORDS Arrhythmogenic cardiomyopathy; Arrhythmogenic left ventricular cardiomyopathy; Arrhythmogenic right ventricular cardiomyopathy; Cascade family screening; Catheter ablation; Diagnosis of arrhythmogenic cardiomyopathy; Disease mechanisms; Electrophysiology; Exercise restriction; Genetic testing; Genetic variants; ICD decisions; Left ventricular noncompaction; Risk stratification; Treatment of arrhythmogenic cardiomyopathy.
ABBREVIATIONS ACE = angiotensin-converting enzyme; ACM = arrhythmogenic cardiomyopathy; AJ = adherens junction; AL = amyloid light-chain; ALVC = arrhythmogenic left ventricular cardiomyopathy; AP = action potential; ARB = angiotensin receptor blocker; ARVC = arrhythmogenic right ventricular cardiomyopathy; AV = atrioventricular; BrS = Brugada syndrome; CMR = cardiac magnetic resonance imaging; COR = Class of Recommendation; CPVT = catecholaminergic polymorphic ventricular tachycardia; CRBBB = complete right bundle branch block; CT = computed tomography; DCM = dilated cardiomyopathy; ECG = electrocardiogram; EPS = electrophysiology study; FAO = fatty-acid oxidation; GJ = gap junction; HCM = hypertrophic cardiomyopathy; HF = heart failure; HFmrEF = heart failure with mid-range ejection fraction; HFrEF = heart failure with reduced ejection fraction; HR = hazard ratio; ICCD = isolated cardiac conduction disease; ICD = implantable cardioverter defibrillator; ID = intercalated disc; IF = intermediate filament; JUP = junction plakoglobin; KSS = Kearns-Sayre syndrome; LBBB = left bundle branch block; LDB3 = LIM domain binding 3; LGE = late gadolinium enhancement; LM = lateral membrane; LOE = Level of Evidence; LQT1 = long QT syndrome type 1; LQT3 = long QT syndrome type 3; LQTS = long QT syndrome; LTCC = L-type calcium channel; LV = left ventricle; LVEF = left ventricular ejection fraction; LVNC = left ventricular noncompaction; MELAS = mitochondrial encephalopathy, lactic acidosis, and stroke; MERRF = myoclonic epilepsy with ragged red fibers; MET = metabolic equivalent; MLP = muscle LIM protein; MRI = magnetic resonance imaging; NCX = Na+/Ca2+ exchanger; NGS = next-generation sequencing; NSVT = nonsustained ventricular tachycardia; NYHA = New York Heart Association; PFHB1 = progressive familial heart block type I; PVC = premature ventricular contraction; RBBB = right bundle branch block; RCM = restrictive cardiomyopathy; RV = right ventricle; RVEF = right ventricular ejection fraction; RVOT = right ventricular outflow tract; SCD = sudden cardiac death; SCN5A = sodium voltage-gated channel alpha subunit 5; SQTS = short QT syndrome; SR = sarcoplasmic reticulum; TAD = terminal activation duration; TRPM4 = transient receptor potential melastatin 4; TWI = T wave inversion; VF = ventricular fibrillation; VFL = ventricular flutter; VT = ventricular tachycardia; VUS = variant of uncertain significance; WES = whole exome sequencing; WGS = whole genome sequencing; ZASP = Z-band alternatively spliced PDZ-motif

(Heart Rhythm 2019;16:e301–e372)

TABLE OF CONTENTS
Section 1 Introduction e304
Section 2 Arrhythmogenic cardiomyopathy e304
2.1. Arrhythmogenic cardiomyopathy e304
2.2. Arrhythmogenic right ventricular cardiomyopathy e306
2.3. Arrhythmogenic left ventricular cardiomyopathy e308
2.4. Final common pathways in arrhythmogenic cardiomyopathy e308
Section 3 Diagnosis and treatment of arrhythmogenic cardiomyopathy e309
3.1. Diagnosis of arrhythmogenic cardiomyopathy e309
3.2. Evaluation overview e309
3.3. Family history e310
3.4. Electrocardiogram features in arrhythmogenic right ventricular cardiomyopathy e310
3.4.1. Repolarization abnormalities e310
3.4.2. Depolarization and conduction abnormalities e312
3.4.3. Electrocardiogram abnormalities in arrhythmogenic cardiomyopathies other than arrhythmogenic right ventricular cardiomyopathy e313
3.4.4. Ambulatory electrocardiogram monitoring e313
3.4.5. Signal-averaged electrocardiogram e313
3.5. Cardiac imaging e313
3.6. Electrophysiology testing e314
3.7. Endomyocardial biopsy e314
3.8. Genetic testing e314
3.8.1. Genetic testing methods e314
3.8.2. Variant and gene interpretation e315
3.8.3. Which test to use e315
3.8.4. Advantages and disadvantages of various methods e316
3.8.5. Who to study e317
3.8.6. The role of genetic testing in arrhythmogenic cardiomyopathies e317
3.8.7. The use of a genetic test in risk stratification and management e318
3.8.8. Limitations of genetic testing e319
3.9. Cascade family screening e319
3.9.1. Cascade family screening: screening recommendations in children and adults e319
3.10. Risk stratification and implantable cardioverter defibrillator decisions e322
3.11. Management of ventricular arrhythmia and dysfunction e325
3.11.1. Medications including angiotensin-converting enzyme inhibitors, beta-blockers, and antiarrhythmic drugs e325
3.11.2. Role of catheter ablation e328
3.12. Prevention of disease progression e329
3.12.1. Clinical exercise questions to direct a literature search e330
3.12.2. Exercise definitions e331
3.12.3. Exercise increases age-related penetrance among genotype-positive relatives e332
3.12.4. Exercise and other arrhythmogenic cardiomyopathies e333
Section 4 Disease mechanisms e334
4.1. Desmosomal defects e334
4.2. Ion channel defects e336
4.2.1. SCN5A e336
4.3. Cytoskeletal defects e337
4.3.1. Myofibrillar cytoskeleton e338
4.3.2. ZASP/LDB3 e339
4.3.3. α-actinin-2 e340
4.3.4. Filamin-C e340
4.3.5. Extramyofibrillar cytoskeleton e341
4.4. Sarcomeric defects e342
4.5. Metabolic defects e342
4.6. Mitochondrial forms e343
4.6.1. Kearns-Sayre syndrome e344
4.7. Histiocytoid (oncocytic) cardiomyopathy e344
Section 5 Other disorders e344
5.1. Infiltrative cardiomyopathies: amyloidosis e344
5.2. Brugada syndrome e347
5.3. Potassium channels: KCNQ1, KCNH2, and TRPM4 e347
5.3.1. KCNQ1 e347
5.3.2. KCNH2 e348
5.3.3. TRPM4 e348
5.4. Phospholamban e349
5.5. Left ventricular noncompaction e350
5.5.1. Diagnostic methods and criteria e352
5.5.2. Treatment e352
Section 6 Future directions and research recommendations e354
Appendix Supplementary Data e355
References e355
Appendix 1 Author Disclosure Table e366
Appendix 2 Peer Reviewer Disclosure Table e370

Developed in collaboration with and endorsed by the American College of Cardiology (ACC), the American Heart Association (AHA), the American Society of Echocardiography (ASE), the Asia Pacific Heart Rhythm Society (APHRS), the European Heart Rhythm Association (EHRA), the Heart Failure Society of America (HFSA), the International Society for Heart & Lung Transplantation (ISHLT), the Japanese Heart Rhythm Society (JHRS), the Latin American Heart Rhythm Society (LAHRS), the National Society of Genetic Counselors (NSGC), the Pediatric & Congenital Electrophysiology Society (PACES), and the Sociedade Brasileira de Arritmias Cardíacas (SOBRAC).

Section 1 Introduction

This international consensus statement is intended to help cardiologists and other health care professionals involved in the care of adult and pediatric patients with arrhythmogenic cardiomyopathy (ACM), which encompasses a broad range of disorders, by providing recommendations for evaluation and management and supporting shared decision making between health care providers and patients in a document format that is also useful at the point of care. This consensus statement was written by experts in the field chosen by the Heart Rhythm Society (HRS) and collaborating organizations. Twelve societies collaborated with the HRS in this effort: the American College of Cardiology (ACC), the American Heart Association (AHA), the Asia Pacific Heart Rhythm Society (APHRS), the American Society of Echocardiography (ASE), the European Heart Rhythm Association (EHRA), the Heart Failure Society of America (HFSA), the International Society for Heart & Lung Transplantation (ISHLT), the Japanese Heart Rhythm Society (JHRS), the Latin American Heart Rhythm Society (LAHRS), the National Society of Genetic Counselors (NSGC), the Pediatric & Congenital Electrophysiology Society (PACES), and the Sociedade Brasileira de Arritmias Cardíacas (SOBRAC).

In accordance with the policies of the HRS, disclosure of any relationships with industry and other entities was required from the writing committee members (Appendix 1) and from all peer reviewers (Appendix 2). Of the 30 committee members, 16 (53%) had no relevant relationships with industry, including the document Chair and Vice-Chair.
Sections that contain recommendations were written by committee members who were free of any relevant relationships with industry.

The writing committee reviewed evidence gathered by electronic literature searches (MEDLINE/PubMed, Embase, Cochrane Library). No cutoff year was imposed on the oldest literature searched. Search terms included but were not limited to the following: arrhythmogenic right ventricular cardiomyopathy, arrhythmogenic cardiomyopathy, dilated cardiomyopathy, lamin, ventricular tachycardia, ventricular arrhythmia, Fabry, noncompaction, phospholamban, cardiac amyloidosis, amyloid heart, heart failure, right ventricular failure, ARVC therapy, ARVC amiodarone, ARVC sotalol, ARVC flecainide, ablation, family screening, family risk, family member, relative, and electrocardiography. Evidence tables were constructed to describe the evidence, including study type, with observational cohorts representing the predominant form of evidence. Case reports were not used to support recommendations. This document also used a PICO question to focus the search for evidence in Section 3.12. A member of the writing committee, free of relationships with industry and educated in evidence-based medicine and clinical practice document methodology, oversaw the evaluation of the evidence and determination of the Level of Evidence (LOE) for each recommendation.

Recommendations were formulated using the Class of Recommendation (COR) and LOE system formulated by the ACC and AHA (Figure 1). This system provides a transparent mechanism to judge benefit relative to risk using a classification scheme (I, IIa, IIb, and III), supported by evidence quality and quantity using an LOE rating (A, B-R, B-NR, C-LD, C-EO); all recommendations are listed with a COR and LOE rating. For clarity and usefulness, each recommendation contains the specific references from the literature used to justify the LOE rating, which are also summarized in the evidence tables (Appendix 3). Recommendations based solely on the writing committee opinion are given an LOE rating of C-EO. Each recommendation is accompanied by explanatory text or knowledge "byte." Flow diagrams and appropriate tables provide a summary of the recommendations, intended to assist health care providers at the point of care. A comprehensive discussion (Section 4) is presented to further the understanding of molecular mechanisms underlying ventricular dysfunction and arrhythmogenesis in ACM. For additional information on HRS clinical practice document development, please refer to the HRS methodology manual.1 Clinical practice documents that are relevant to this document are listed in Table 1.

To reach consensus, the writing committee members participated in surveys, requiring a predefined threshold of 75% approval for each recommendation, with a quorum of two-thirds of the writing committee. An initial failure to reach consensus was resolved by subsequent discussions, revisions as needed, and re-voting. The mean consensus over all recommendations was 94%.

An industry forum was conducted to achieve a structured dialogue to address technical questions and gain a better understanding of future directions and challenges. Because of the potential for actual or perceived bias, HRS imposes strict parameters for information sharing to ensure that industry participates only in an advisory capacity and has no role in either the writing or review of the document.
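For illustration only, the voting rules described above (a two-thirds quorum of the writing committee and a 75% approval threshold per recommendation) can be expressed as a simple check. This sketch is not part of the consensus statement, and the function and variable names are hypothetical.

```python
# Illustrative sketch only (not part of the consensus statement): the voting
# rules described above, ie, a quorum of two-thirds of the writing committee
# and >=75% approval of votes cast. Integer arithmetic avoids floating-point
# comparisons.

def recommendation_passes(votes_for: int, votes_cast: int, committee_size: int) -> bool:
    """Check the two-thirds quorum and the 75% approval threshold."""
    quorum_met = 3 * votes_cast >= 2 * committee_size        # votes_cast / size >= 2/3
    approval_met = votes_cast > 0 and 4 * votes_for >= 3 * votes_cast  # >= 3/4
    return quorum_met and approval_met

# Example: 24 of 30 members vote (80% turnout) and 20 of 24 approve (~83%).
print(recommendation_passes(votes_for=20, votes_cast=24, committee_size=30))  # True
```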
This consensus statement underwent internal review by the HRS Scientific and Clinical Documents Committee and was approved by the writing committee. Public comment on recommendations was obtained. The document underwent external peer review by reviewers appointed by HRS and each of the collaborating societies, and revisions were made by the chairs.

Section 2 Arrhythmogenic cardiomyopathy

2.1. Arrhythmogenic cardiomyopathy
ACM is defined as an arrhythmogenic heart muscle disorder not explained by ischemic, hypertensive, or valvular heart disease. ACM may present clinically with symptoms or documentation of atrial fibrillation, conduction disease, and/or right ventricular (RV) and/or left ventricular (LV) arrhythmia (Figure 2). The etiology may be part of a systemic disorder (eg, sarcoidosis, amyloidosis), an apparently isolated cardiac abnormality (eg, myocarditis), an infection (eg, Chagas disease), or genetic (eg, desmosomal arrhythmogenic right ventricular cardiomyopathy [ARVC] or arrhythmogenic left ventricular cardiomyopathy [ALVC], lamin A/C, filamin-C, phospholamban) with particular phenotypic (cardiac, cutaneous, immunologic) features (Figure 3). Ion channel disease, which can also cause ACM, is considered in Section 4 Disease Mechanisms and is discussed in other clinical practice documents. Similarly, sarcoidosis and Chagas disease, which are important causes of ACM, are discussed only briefly because they are the subject of other clinical practice documents.

Figure 1 ACC/AHA Recommendation System: Applying Class of Recommendation and Level of Evidence to Clinical Strategies, Interventions, Treatments, and Diagnostic Testing in Patient Care. Reproduced with permission of the American College of Cardiology and the American Heart Association.2

In contrast, the arrhythmic management of patients with amyloidosis is comprehensively discussed in Section 5.1, since this topic has not been adequately addressed in previous clinical practice documents.

A distinguishing feature of ACM is the clinical presentation with documented and/or symptomatic arrhythmia. The ACM phenotype can overlap with other cardiomyopathies, particularly dilated cardiomyopathy (DCM), in which the arrhythmia presentation may be associated with moderate to severe ventricular dilatation and/or impaired systolic function (eg, ARVC or ALVC caused by DSP, FLNC, SCN5A, or PLN variants) (Figure 3 and Figure 4). As with all forms of genetically based cardiovascular disease, the mechanisms responsible for the phenotype that develops rely on dysfunction of final common protein pathways. For instance, DCM is typically caused by variants in genes encoding structural proteins such as cytoskeletal and sarcomeric proteins and, in this case, usually presents with features of heart failure (HF). Arrhythmias, which are most commonly caused by variants in genes encoding ion channels when isolated, may also be a late manifestation in DCM or other forms of cardiomyopathy. These "final common pathways" can interact as overlapping pathways through protein-protein binding and, in these cases, can produce complex phenotypes, such as DCM with significant arrhythmia potential.
This distinction between an arrhythmic vs a HF presentation in patients who fulfill current DCM diagnostic criteria is important because the genetic basis, sudden death risk, prognosis, and focus of management are different in these two scenarios. Although rare, ACM can also overlap with hypertrophic cardiomyopathy (HCM; final common pathway, the sarcomere), restrictive cardiomyopathy (RCM; final common pathway, the sarcomere), or LV noncompaction (LVNC; final common pathway, the sarcomere and cytoskeleton). Troponin T variants, unlike other sarcomeric disease-causing genes, may present with cardiac arrest or sudden death despite mild or even absent LV hypertrophy, whereas troponin I variants may cause a restrictive phenotype in which the dominant clinical presentation is atrial fibrillation.13–15 Nonsarcomeric HCM (eg, Anderson-Fabry disease), caused by alpha-galactosidase A variants, may also initially present with arrhythmia, though not in the absence of diagnostic phenotypic features.

Clinical evaluation to diagnose and manage ACM in adults and children should consider genetic and nongenetic causes, with an assessment of electrocardiographic and structural abnormalities and arrhythmic risk. The pedigree evaluation should include a 3-generation family tree with an emphasis on premature cardiovascular events (eg, sudden death, HF) and associated cardiac (eg, arrhythmias, conduction disease) and noncardiac (eg, skeletal myopathy, renal failure, auditory/visual defects) phenotypes. Mutation analysis, endomyocardial biopsy, and electrophysiology studies (EPSs) are indicated in the particular clinical circumstances discussed below.

Table 1 Relevant clinical practice documents
- 2017 AHA/ACC/HRS Guideline for Management of Patients with Ventricular Arrhythmias and the Prevention of Sudden Cardiac Death3 (AHA, ACC, HRS; 2017)
- ACC/AHA/HRS 2008 Guidelines for Device-Based Therapy of Cardiac Rhythm Abnormalities4 (ACC, AHA, HRS; 2008)
- HRS/EHRA Expert Consensus Statement on the State of Genetic Testing for the Channelopathies and Cardiomyopathies5 (HRS, EHRA; 2011)
- HRS/EHRA/APHRS Expert Consensus Statement on the Diagnosis and Management of Patients with Inherited Primary Arrhythmia Syndromes6 (HRS, EHRA, APHRS; 2013)
- 2016 ACC/AHA/HFSA Focused Update on New Pharmacological Therapy for Heart Failure: An Update of the 2013 ACCF/AHA Guideline for the Management of Heart Failure7 (ACC, AHA, HFSA; 2016)
- 2013 ACCF/AHA Guideline for the Management of Heart Failure8 (ACC, AHA; 2013)
- 2016 ESC Guidelines for the Diagnosis and Treatment of Acute and Chronic Heart Failure9 (ESC; 2016)
- Marcus et al. Diagnosis of Arrhythmogenic Right Ventricular Cardiomyopathy/Dysplasia: Proposed Modification of the Task Force Criteria10 (NA; 2010)
- Hershberger et al. Genetic Evaluation of Cardiomyopathy—A Heart Failure Society of America Practice Guideline11 (HFSA; 2018)
- Corrado et al. Treatment of Arrhythmogenic Right Ventricular Cardiomyopathy/Dysplasia: An International Task Force Consensus Statement12 (NA; 2015)

2.2. Arrhythmogenic right ventricular cardiomyopathy
ARVC is the best characterized of the ACMs, with early clinical reports16–18 leading to internationally agreed-upon diagnostic10,18,19 and management guidelines.12 The predominant RV involvement with left bundle branch block (LBBB) ventricular tachycardia (VT) and fibrous or fibrofatty replacement of RV myocardium is distinct from the LV predominance of most cardiac conditions and other ACMs. ARVC is most often familial, with autosomal dominant inheritance. Studies of one of the uncommon recessive forms20,21 with a cardiocutaneous phenotype led to the identification of the first disease-causing gene22 and the recognition that most ARVC is caused by variants in one of several desmosomal genes (see Section 3.8 Genetic Testing, below).23–26 Autosomal dominant inheritance predominates, and most patients will have one or more pathogenic variants in genes encoding desmosomal proteins. The disease is therefore considered to have desmosome dysfunction as its final common pathway; in other words, ARVC is a disease of the desmosome, or a desmosomopathy.27–29 However, there are disease-causing genes that cause "classic" ARVC that do not encode desmosomal proteins. In most of these cases, the proteins encoded by the mutated gene are either binding partners of desmosomal proteins or proteins whose function is disturbed due to desmosomal protein dysfunction or vice versa, such as ion channels. Recently, pathogenic gene variants have been identified in patients and families which suggest that more than just the desmosome is involved: in fact, the intercalated disc (ID) as a whole is involved.27–29 LV ACM would similarly follow this "final common pathway" model.27–29

Figure 2 Algorithm to consider the presence of an arrhythmogenic cardiomyopathy (ACM). The algorithm starts from ventricular dysfunction not explained by ischemic, hypertensive, or valvular heart disease and asks whether arrhythmia (conduction disease, atrial arrhythmias, or ventricular arrhythmias) is part of the clinical presentation and whether another disorder is present; if not, the patient does not have ACM. Other systemic disorders to consider: sarcoidosis, myocarditis, Chagas disease, amyloidosis. Other acquired conditions and cardiomyopathies to consider: ischemic heart disease, hypertensive heart disease, valvular heart disease, dilated cardiomyopathy.

Figure 3 Arrhythmogenic cardiomyopathy (ACM): phenotypes associated with the most common genetic causes of ACM.
- Desmosomal: ARVC/ALVC, hair/skin abnormalities
- Lamin A/C: conduction disease, ventricular arrhythmia/sudden death, DCM, lipodystrophy, muscular dystrophy
- SCN5A: Brugada syndrome, conduction disease, AF, VT/VF, DCM
- PLN: low-voltage ECG, VT/VF, DCM, HCM, ARVC
- TMEM43: sudden death M > F, DCM
- FLNC: sudden death, DCM
- RBM20: DCM, AF; ventricular arrhythmia/sudden death uncommon as an early feature
- Desmin: skeletal myopathy, DCM; arrhythmia uncommon as an early feature
AF = atrial fibrillation; ALVC = arrhythmogenic left ventricular cardiomyopathy; ARVC = arrhythmogenic right ventricular cardiomyopathy; DCM = dilated cardiomyopathy; ECG = electrocardiogram; F = female; FLNC = filamin-C; M = male; HCM = hypertrophic cardiomyopathy; PLN = phospholamban; RBM20 = RNA binding motif protein 20; VF = ventricular fibrillation; VT = ventricular tachycardia; SCN5A = sodium voltage-gated channel alpha subunit 5; TMEM43 = transmembrane protein 43.

2.3. Arrhythmogenic left ventricular cardiomyopathy
The distinctive phenotypic presentation of ARVC with LBBB VT associated with RV structural abnormalities overshadowed recognition that most patients with ARVC develop LV involvement, especially when evaluated with sensitive imaging modalities such as cardiac magnetic resonance imaging (CMR) (biventricular ACM).
With the identification of desmosomal disease-causing variants, individuals and families with predominantly LV arrhythmia and structural abnormalities were recognized,30,31 as were patients with nondesmosomal arrhythmia-associated variants (eg, lamin A/C,32 phospholamban,33 filamin-C34) who had ACM with predominantly left (but also right) or biventricular phenotypes. The term "ALVC" has been proposed to recognize ACM of LV origin as distinct from ARVC and to rectify the relative lack of diagnostic and prognostic data, which contrasts with multiple international clinical practice documents10,12,19 generated for ARVC. In time, a better understanding will hopefully be gained of why particular variants (eg, desmosomal, lamin A/C [LMNA], sodium voltage-gated channel alpha subunit 5 [SCN5A], desmin [DES]) cause diverse phenotypes, and the clinical distinction between ARVC and ALVC will be viewed from a pathogenetic rather than a phenotypic basis under an umbrella of genetic and acquired ACM. For the present, however, defining the diagnostic criteria and phenotypic features of ALVC in relation to outcome will be important in understanding the genetic basis and pathogenesis of the genetic and nongenetic conditions encompassed by ACM.

Figure 4 Approach to understanding the common pathway and genetic variants in a patient with arrhythmogenic cardiomyopathy (ACM) according to the predominant ventricular dysfunction. See also Table 3. The figure groups ventricular dysfunction in ACM (not due to systemic disorders) as right (ARVC), right and left (biventricular), and left (ALVC), linking each to common pathways (desmosome, intercalated disc, ion channel, cytoskeleton, sarcoplasmic reticulum, sarcomere, mitochondria) and to genetic variants (PKP2, JUP, DSC2, DSG2, DSP, SCN5A, PLN, LMNA, FLNC, TMEM43, LDB3, desmin, α-actinin, BAG3, NKX2-5, RBM20, KCNQ1, KCNH2, TRPM4, and mitochondrial mutations). ALVC = arrhythmogenic left ventricular cardiomyopathy; ARVC = arrhythmogenic right ventricular cardiomyopathy; BAG3 = BCL2 associated athanogene 3; DSC2 = desmocollin-2; DSG2 = desmoglein-2; DSP = desmoplakin; FLNC = filamin-C; JUP = junction plakoglobin; KCNH2 = potassium voltage-gated channel subfamily H member 2; KCNQ1 = potassium voltage-gated channel subfamily Q member 1; LDB3 = LIM domain binding 3; LMNA = lamin A/C; NKX2-5 = NK2 homeobox 5; PKP2 = plakophilin-2; PLN = phospholamban; RBM20 = RNA binding motif protein 20; SCN5A = sodium voltage-gated channel alpha subunit 5; TMEM43 = transmembrane protein 43; TRPM4 = transient receptor potential melastatin 4.

2.4. Final common pathways in arrhythmogenic cardiomyopathy
The "final common pathway" hypothesis,35–37 which states that hereditary cardiovascular diseases with similar phenotypes and genetic heterogeneity will occur due to abnormalities in genes encoding proteins of similar function, or genes encoding proteins participating in a common pathway cascade, was initially described in 1998 in an attempt to direct gene discovery for various cardiovascular clinical phenotypes. Since its original description, the "final common pathway" hypothesis has been fairly predictive of the genes and proteins involved in phenotype development and, to a lesser extent, disease severity. This is seen in HCM (a disease of sarcomere function), arrhythmia disorders such as long QT syndrome (LQTS), Brugada syndrome (BrS), catecholaminergic polymorphic ventricular tachycardia (CPVT), and others (diseases of ion channel function), and Noonan syndrome (a disease of the Ras pathway).
In the case of ARVC, the final common pathway appears to be a disturbance of the function of the desmosome and ID. However, ACM includes not only ARVC but also arrhythmogenic left-sided cardiomyopathies, which are currently less well studied. Data do exist, however, that appear to demonstrate pathways that overlap not only with those associated with ARVC, but also with sarcomere and ion channel pathways. Knowledge of the genes and their encoded proteins involved in the pathophysiology of these disorders, as well as of other proteins that interact with the final common pathway proteins, not only enables a better understanding of the clinical phenotypes that develop but also provides potential targets for current and future therapies (Figure 5 and Figure 18).

Figure 5 Cytoskeletal protein complexes within the cardiomyocyte costamere and Z-disc. Force is distributed externally from the costameres and internally throughout the myocyte by the Z-disc. Structural and signaling proteins within the costamere and Z-disc are shown. Many of these proteins have been implicated in mechano-sensing or sarcomere assembly. MYOZ2 = myozenin 2; Cn = calcineurin; PDZ-3LIM = one-PDZ and three-LIM domain protein; PDZ-1LIM = one-PDZ and one-LIM domain protein; MLP/CRP3 = muscle LIM protein/cysteine-rich protein 3; FHL2 = four-and-a-half LIM protein 2; MAPRs = muscle ankyrin repeat proteins; MURFs = muscle-specific ring-finger proteins. Modified with permission of the American Physiological Society.38

Section 3 Diagnosis and treatment of arrhythmogenic cardiomyopathy

3.1. Diagnosis of arrhythmogenic cardiomyopathy
The clinical presentation and diagnosis of the genetically determined causes (eg, ARVC, lamin A/C, filamin-C, desmin) of ACM prior to puberty is uncommon. The diagnosis of ACM requires a high degree of clinical suspicion concomitant with diagnostic testing. Clinical perspectives of ACM arise primarily from experiences with patients who present with arrhythmias of RV origin, as well as sudden cardiac death (SCD).39 In the subset of patients with ARVC, individual clinical and diagnostic findings are neither highly specific nor sensitive, and diagnostic criteria have been established to standardize the diagnosis.10,19 The diagnosis of ARVC should be considered in the following: patients with exercise-related palpitations and/or syncope; survivors of sudden cardiac arrest (particularly during exercise); and individuals with frequent ventricular premature beats (>500 in 24 hours) and/or VT of LBBB morphology in the absence of other heart disease.10,19,39,40 In patients with suspected ACM who do not meet the diagnostic criteria for ARVC, the evaluation should be systematic to establish the diagnosis of other genetic and nongenetic forms of ACM, with repeated evaluations considered if the disease is strongly suspected.

3.2. Evaluation overview
The underlying principles and clinical evaluations required for the diagnosis and management of ACM are similar in ARVC and ALVC with respect to excluding acquired causes for the cardiomyopathy, ensuring a probable or definitive diagnosis, and characterizing arrhythmia in relation to treatment and prognosis. Genetic causes of isolated or predominantly RV arrhythmia and structural abnormalities are most commonly associated with desmosomal gene variants.
There may be additional cutaneous phenotypes that manifest with autosomal dominant desmoplakin variants and are often florid in recessive desmosomal disease.20,23 The genetic causes of arrhythmia and structural disease of LV origin, however, typically manifest with additional cardiac (eg, conduction disease, atrial fibrillation) or systemic (eg, muscular dystrophy, lipodystrophy) phenotypes. Familial evaluation should therefore focus on arrhythmic disease, but also consider associated phenotypes. Several of the ALVC disease-causing gene variants have been reported in patients with LV or biventricular arrhythmia and LV dilatation and/or impaired function (eg, PLN, FLNC, LMNA, SCN5A). The diagnostic distinction here is from DCM and its genetic causes.28,41,42 In ACM, the clinical presentation in the proband and/or family members is typically with arrhythmia rather than HF, although both may be present in advanced disease.

In patients with suspected ACM, the initial evaluation includes clinical history, physical examination, detailed family history, 12-lead electrocardiogram (ECG), 2D echocardiography, ambulatory ECG monitoring, and CMR.10 Most patients with suspected ACM presenting with arrhythmia can be diagnosed using noninvasive imaging and electrocardiographic assessment. If the initial testing is nondiagnostic, additional testing may include signal-averaged ECG, exercise ECG, pharmacological testing with isoproterenol,43 endomyocardial biopsy, and EPS. In a series of 48 older children (aged 13–15 years) presenting with possible ACM, a comprehensive clinical and genetic evaluation in the context of the adult Task Force Criteria for the diagnosis of ARVC revealed that 46% of the children had features consistent with a diagnosis of HCM, DCM, or ion channel disease, while 25% had features consistent with ARVC.44

The diagnosis of ALVC relies on documenting arrhythmia of isolated or predominantly LV origin in a proband or family member with cardiomyopathy (eg, arrhythmia) not caused by ischemic, valvular, or hypertensive heart disease. Impaired LV function and/or structural abnormalities as determined by 2D echocardiography and CMR can be absent, mild, or severe. Typically, arrhythmia is an early manifestation of disease. Internationally accepted diagnostic criteria analogous to those established for ARVC10 are required; however, an issue is the diagnosis of ACM in the presence of other potential causes for which coexistence vs causality may be difficult to determine. Given the currently incomplete knowledge of the genetic basis of ACM, particularly of the ALVC and biventricular forms, the development of clinical diagnostic criteria is needed.

After the original clinical description of RV dysplasia,17 it became clear that the diagnosis of this condition would be difficult to establish, particularly in the early stages of the disease when RV dilation or segmental dilatation is mild. Therefore, differentiating RV dysplasia from the normal heart could be equivocal. A task force was subsequently assembled to consider criteria for the diagnosis of arrhythmogenic RV dysplasia/cardiomyopathy, the results of which were published in 1994.19 The task force concluded that there is no single gold standard for the diagnosis and that the diagnosis requires a combination of major and minor criteria encompassing structural, histological, electrocardiographic, arrhythmogenic, and genetic factors. LV disease was excluded from these criteria.
The revision of the Task Force Criteria in 2010 included LV disease and added CMR for the diagnosis; the criteria are listed in Figure 6.10 Diagnostic criteria for ARVC in the pediatric population remain to be established, since disease expression in children is uncommon. In a series of 16 patients, the clinical presentation was life-threatening arrhythmia in 10 (median age of 14 years). In all 16 patients, LV and/or RV dysfunction was common and associated with the histopathological features of ARVC.45 Recently, a diagnostic and prognostic role has been proposed for the presence of anti-desmoglein-2 (DSG2) antibodies, which were present in patients with ARVC but not in controls; this work is potentially important and warrants confirmation in a larger number of patients and in other forms of ACM (eg, cardiac sarcoidosis).46,47

3.3. Family history
A detailed family history covering at least 3 generations and the clinical evaluation of relatives are important in the diagnostic assessment for ACM. In a patient with suspected ACM, a family history focusing on unexplained premature deaths, arrhythmias, and conduction disease may identify familial disease. The presence of associated noncardiac phenotypes (eg, skeletal myopathy, other organ disease) can also provide clues to the underlying diagnosis for both genetic (eg, desmin or lamin myopathy) and nongenetic (eg, Chagas disease) causes.

3.4. Electrocardiogram features in arrhythmogenic right ventricular cardiomyopathy
The 12-lead ECG is an important part of the diagnostic evaluation of patients with suspected ACM. Reports on the ECG findings of patients who meet the diagnostic criteria for ARVC have shown that the majority (>85%) demonstrate at least one characteristic ECG feature of ARVC, but a normal ECG has been reported in up to 12%.49–51 ARVC is a progressive disease, which is reflected in the well-documented dynamic ECG changes associated with disease progression that have been demonstrated in several cohorts of patients with ARVC.49–54 Over time, the ECG may evolve with further prolongation of the S wave upstroke, increased QRS duration, and development of bundle branch block and precordial T wave inversion (TWI).53,54

3.4.1. Repolarization abnormalities
The prevalence of TWI in leads V1–V3 (the characteristic ECG finding in patients with ARVC) varies from 19% to 67%,55–57 presumably due to the difference in study populations. TWI in the precordial leads beyond V2 is relatively common in Afro-Caribbean individuals,58 although it is rare (1% in females and 0.2% in males) in asymptomatic white individuals.59 TWI in patients younger than 14 years of age is more frequently observed in athletes (the so-called juvenile pattern).60 TWI is reasonably specific in patients older than 14 years of age and is considered a major diagnostic abnormality in ARVC. TWI in leads V1–V4 in individuals older than 14 years associated with complete right bundle branch block (CRBBB) is a minor criterion for the diagnosis of ARVC (Figure 7). The presence of TWI in lateral and/or inferior leads suggests LV involvement in patients with ARVC (Figure 7).61

Figure 6 Modified Task Force Criteria for arrhythmogenic right ventricular cardiomyopathy (ARVC) showing the diagnostic categories for major and minor criteria according to the 2010 ARVC Task Force Criteria. These criteria are sensitive and specific in differentiating patients with ARVC from control populations but have not been adequately tested in relation to other arrhythmogenic cardiomyopathies (ACMs) with overlapping phenotypes (eg, cardiac sarcoidosis, myocarditis).48

Global or regional dysfunction and structural alterations determined by echo, MRI, or RV angiography
- Echo, major: regional RV akinesia, dyskinesia, or aneurysm and 1 of the following (end diastole): (a) PLAX RVOT ≥32 mm (PLAX/BSA ≥19 mm/m2); (b) PSAX RVOT ≥36 mm (PSAX/BSA ≥21 mm/m2); (c) fractional area change ≤33%
- Echo, minor: regional RV akinesia, dyskinesia, or aneurysm and 1 of the following (end diastole): (a) PLAX RVOT ≥29 to <32 mm (PLAX/BSA ≥16 to <19 mm/m2); (b) PSAX RVOT ≥32 to <36 mm (PSAX/BSA ≥18 to <21 mm/m2); (c) fractional area change >33% to ≤40%
- MRI, major: regional RV akinesia or dyskinesia or dyssynchronous RV contraction and 1 of the following: (a) ratio RVEDV/BSA ≥110 mL/m2 (male), ≥100 mL/m2 (female); (b) RVEF ≤40%
- MRI, minor: regional RV akinesia or dyskinesia or dyssynchronous RV contraction and 1 of the following: (a) ratio RVEDV/BSA ≥100 to <110 mL/m2 (male), ≥90 to <100 mL/m2 (female); (b) RVEF >40% to ≤45%
- RV angiography, major: regional RV akinesia, dyskinesia, or aneurysm

Tissue characterization of wall (endomyocardial biopsy showing fibrous replacement of the RV free wall myocardium in ≥1 sample, with or without fatty replacement)
- Major: residual myocytes <60% by morphometric analysis (or <50% if estimated)
- Minor: residual myocytes 60% to 75% by morphometric analysis (or 50% to 65% if estimated)

Repolarization abnormalities (ECG)
- Major: inverted T waves in right precordial leads (V1, V2, and V3) or beyond in individuals >14 years of age (in the absence of complete RBBB QRS ≥120 ms)
- Minor: (I) inverted T waves in leads V1 and V2 in individuals >14 years of age (in the absence of complete RBBB) or in V4, V5, or V6; (II) inverted T waves in leads V1, V2, V3, and V4 in individuals >14 years of age in the presence of complete RBBB

Depolarization/conduction abnormalities (ECG)
- Major: epsilon wave (reproducible low-amplitude signals between end of QRS complex to onset of the T wave) in the right precordial leads (V1 to V3)
- Minor: (I) late potentials by SAECG in ≥1 of 3 parameters in the absence of QRS duration of ≥110 ms on the standard ECG: (a) filtered QRS duration (fQRS) ≥114 ms; (b) duration of terminal QRS <40 μV (low-amplitude signal duration) ≥38 ms; (c) root-mean-square voltage of terminal 40 ms ≤20 μV; (II) terminal activation duration of QRS ≥55 ms measured from the nadir of the S wave to the end of the QRS, including R′, in V1, V2, or V3, in the absence of complete RBBB

Arrhythmias
- Major: nonsustained or sustained VT of LBBB morphology with superior axis (negative or indeterminate QRS in leads II, III, and aVF and positive in lead aVL)
- Minor: (I) nonsustained or sustained VT of RV outflow configuration, LBBB morphology with inferior axis (positive QRS in leads II, III, and aVF and negative in lead aVL) or of unknown axis; (II) >500 ventricular extrasystoles per 24 hours (Holter)

Family history
- Major: (I) ARVC confirmed in a first-degree relative who meets current Task Force Criteria; (II) ARVC confirmed pathologically at autopsy or surgery in a first-degree relative; (III) identification of a pathogenic mutation categorized as associated or probably associated with ARVC in the patient under evaluation
- Minor: (I) history of ARVC in a first-degree relative in whom it is not possible or practical to determine whether the family member meets current Task Force Criteria; (II) premature sudden death (<35 years of age) due to suspected ARVC in a first-degree relative; (III) ARVC confirmed pathologically or by current Task Force Criteria in a second-degree relative

Modified Task Force Criteria for ARVC – diagnostic categories
- Definite: 2 major, OR 1 major and 2 minor, OR 4 minor criteria from different categories
- Borderline: 1 major and 1 minor, OR 3 minor criteria from different categories
- Possible: 1 major, OR 2 minor criteria from different categories

BSA = body surface area; ECG = electrocardiogram; echo = echocardiogram; MRI = magnetic resonance imaging; PLAX = parasternal long-axis; PSAX = parasternal short-axis; RBBB = right bundle branch block; RV = right ventricular; RVEDV = right ventricular end-diastolic volume; RVEF = right ventricular ejection fraction; RVOT = right ventricular outflow tract; SAECG = signal-averaged electrocardiogram; VT = ventricular tachycardia.

3.4.2. Depolarization and conduction abnormalities
3.4.2.1. Epsilon wave
The epsilon wave is defined as a reproducible low-amplitude deflection located between the end of the QRS and the onset of the T wave in leads V1–V3 (Figure 7).10,56 Epsilon waves reflect delayed conduction in the RV (Figure 7). The prevalence of the epsilon wave in European and American registries varies from 0.9% to 25%.62 Electroanatomical mapping in patients with ARVC and an epsilon wave has shown that the timing of the epsilon wave on the surface ECG corresponded to activation of the basal (peri-tricuspid) RV region of the epicardium. Epsilon waves have been associated with severe conduction delay due to extensive endocardial and epicardial scarring at that site.63 Epsilon waves may reflect short-term arrhythmia risk but are of limited diagnostic utility because they are variable, have low sensitivity and specificity (being seen in other conditions), and are dependent on ECG filter settings and magnification.54,62,64,65

3.4.2.2. Prolonged terminal activation duration
Prolonged terminal activation duration (TAD) is measured from the nadir of the S wave to the end of all depolarization deflections (Figure 8). A TAD ≥55 ms in any of leads V1–V3 in the absence of CRBBB is defined as a prolonged TAD.55,66 Prolonged TAD in leads V1–V3 has been reported to aid in differentiating ARVC from right ventricular outflow tract (RVOT)-VT.67 Prolonged TAD was confirmed in 30 of 42 patients with ARVC and in only 1 of 27 patients with idiopathic RVOT-VT.55 Moreover, TAD prolongation was the sole ECG abnormality in 4 of 7 gene-positive family members with ARVC,68 suggesting a role in the early recognition of "at-risk" individuals.

Figure 7 Representative 12-lead electrocardiograms (ECGs) obtained from patients with arrhythmogenic right ventricular cardiomyopathy (ARVC) with incomplete right bundle branch block (IRBBB) and complete right bundle branch block (CRBBB). QRS duration of IRBBB and CRBBB was 110 ms and 140 ms, respectively. The closed arrow indicates an epsilon wave, defined as a low-amplitude deflection located between the end of the QRS and the onset of the T wave in leads V1–V3. The asterisk indicates the T wave inversion recorded in V1–V4 in patients with ARVC and IRBBB or CRBBB.
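The diagnostic categories in Figure 6 amount to a counting rule over major and minor criteria. As an illustration only (not part of the consensus statement), the following minimal sketch encodes that rule; the function name is hypothetical, and the counts are assumed to have already been taken from different criterion categories, as Figure 6 requires.

```python
# Illustrative sketch only: the 2010 Task Force diagnostic categories as a
# counting rule. n_major and n_minor are assumed to count criteria drawn
# from different criterion categories (see Figure 6).

def arvc_diagnostic_category(n_major: int, n_minor: int) -> str:
    """Map counts of major/minor Task Force Criteria to a diagnostic category."""
    if n_major >= 2 or (n_major >= 1 and n_minor >= 2) or n_minor >= 4:
        return "definite"
    if (n_major >= 1 and n_minor >= 1) or n_minor >= 3:
        return "borderline"
    if n_major >= 1 or n_minor >= 2:
        return "possible"
    return "criteria not met"

# Example: one major plus two minor criteria from different categories.
print(arvc_diagnostic_category(n_major=1, n_minor=2))  # "definite"
```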
Despite this variation, the 24-hour burden was accurate 89.6% of the time to the correct grouping based on the revised Task Force Criteria.72,73 Documentation of ventricular arrhythmia with a morphology consistent with an LV origin is required for the diagnosis of ALVC. Precise definitions relating to charac-teristics VT and/or frequency of ventricular ectopy remain to be established for forms of ACM other than ARVC. The arrhythmia may be asymptomatic or associated with palpita-tions and/or impaired consciousness. 3.4.5. Signal-averaged electrocardiogram Although an abnormal signal-averaged ECG was a minor cri-terion in the 2010 Task Force Criteria, its use has declined largely due to its limited sensitivity and specificity, as well as its limited availability in many medical centers.10,74 3.5. Cardiac imaging Echocardiography and other noninvasive imaging modalities are important for evaluating patients suspected of ACM to assess structural and functional abnormalities and aid in diag-nosis.75,76 For many patients with suspected ACM, 2D echocardiog-raphy provides adequate visualization, enabling a systematic qualitative and quantitative assessment of ventricular func-tion and cavity dimensions, although there may be limitations when imaging the RV. Additional imaging with CMR pro-vides accurate measurements of volumes and also regional and global ventricular function.52 If CMR is contraindicated or not available, multidetector computed tomography (CT), RV angiography or radionuclide angiography are alterna-tives, but are currently less frequently used to assess ventric-ular function. The Task Force Criteria for ARVC include the presence of RV akinesia, dyskinesia, or aneurysms, together with an assessment of RVOT diameter and RV-fractional area change. Emerging echocardiographic parameters in the evaluation of patients with suspected or established ARVC include the measurement of tricuspid annular plane systolic excursion, RV basal diameter, global longitudinal strain (RV and LV), mechanical dispersion (RV and LV), and the use of 3D echocardiography.77,78 However, prospective studies are needed before these assessments are recommended for routine use. The 2010 Task Force Criteria for ARVC included CMR pa-rameters for RV global and regional dysfunction and RV vol-ume.10 The major criterion requires a regional RV wall motion abnormality and either increased RV end-diastolic volume (110 mL/m2 in men; 100 mL/m2 in women) or depressed right ventricular ejection fraction (RVEF) 40% (sensitivity: men 76%, women 68%; specificity: men 90%, women 98%). The CMR minor criterion also requires regional RV wall mo-tion abnormality with lesser degrees of RV enlargement (100 mL/m2 in men; 90 mL/m2 in women).10 The Task Force Criteria did not include CMR measures of RV myocar-dial fat or late gadolinium enhancement (LGE); however, these were not considered reliable measurements at the time the Task Force Criteria were developed (2010). The 2010 Task Force Criteria for ARVC do not define diagnostic criteria for LV involvement. If present, LGE is Figure 8 Terminal activation duration (TAD) is measured from the nadir of the S wave to the end of all depolarization deflections and is prolonged if 55 ms in any of the V1–V3 leads in the absence of complete right bundle branch block (CRBBB). 
The 2010 Task Force Criteria for ARVC do not define diagnostic criteria for LV involvement. If present, LGE is typically found in a subepicardial or mid-wall distribution confined to the LV. LV-dominant disease may be underdiagnosed and attributed to other disorders.78 The potential of CMR to diagnose and risk stratify patients with ACM remains to be fully exploited. LV LGE has been identified as the sole imaging abnormality in patients with desmoplakin disease who have arrhythmia of LV origin and a normal ECG.31 In general, ECG abnormalities and arrhythmia are considered the earliest manifestations54,79; however, Sen-Chowdhry et al have also demonstrated that CMR may be sensitive in detecting early changes in ARVC. The role of CMR in the early diagnosis of ACM of nondesmosomal origin, for other genetic and acquired causes, warrants evaluation.30,80 CMR expertise will be particularly important in early diagnosis in the absence of ECG or other imaging abnormalities, given the risk that epicardial fat may be misinterpreted as delayed enhancement.

LV structural and functional abnormalities will relate to particular genetic abnormalities and disease stage. Current genotype-phenotype relations are based on small data sets but suggest that ACM with clinically significant LV arrhythmias (eg, ALVC) may occur with "normal" to severely impaired LV function. Experience is greatest with lamin A/C disease, in which phenotypes include Emery-Dreifuss muscular dystrophy, generalized lipodystrophy, DCM with HF, progressive conduction disease with late-onset DCM, and ALVC with or without significant LV impairment. ALVC caused by desmoplakin variants can also present with absent to severe LV dysfunction and may present with sudden death.81 Preliminary experience indicates that LGE on CMR can be present in the absence of LV dysfunction and may provide an early diagnostic feature when LV arrhythmia appears to have occurred in isolation.31

3.6. Electrophysiology testing
Electrophysiology testing in ACM is often unnecessary for the diagnostic evaluation of patients with suspected ARVC or ALVC.12 Multicenter studies of patients with ARVC who received an implantable cardioverter defibrillator (ICD) have demonstrated the low predictive accuracy of electrophysiology testing in identifying those at risk of SCD and/or life-threatening arrhythmia.82,83 The reported incidence of "life-saving" ICD discharges for treatment of fast VT/ventricular fibrillation (VF) was not significantly different between those who were and those who were not inducible. Corrado et al studied 106 patients with ARVC who received an ICD as primary prevention. The positive and negative predictive values for VT/VF inducibility were 35% and 70%, respectively.82 Electrophysiology testing, however, may be beneficial in patients with refractory ventricular arrhythmias for ablation consideration and differentiation from RVOT tachycardia. In this setting, electrophysiology testing with high-dose isoproterenol may help differentiate patients with idiopathic VT or ventricular premature beats from those with ARVC.84

3.7. Endomyocardial biopsy
Biopsy can be particularly useful in identifying systemic or inflammatory conditions that cause ACM (eg, sarcoidosis, myocarditis).
However, endomyocardial biopsy (one of the Task Force Criteria for the diagnosis of ARVC) is invasive, lacks sensitivity and specificity, has a low diagnostic yield, and is therefore now rarely performed in the initial diagnosis of ARVC. The characteristic histological feature is the presence of transmural fibrofatty replacement of the RV myocardium, with major and minor criteria differentiated by degree of replacement (<60% vs 60%–75% residual myocytes by morphometric analysis).10 Diagnosis by biopsy is limited by false negatives secondary to patchy involvement and sampling error.85,86 Electroanatomical voltage mapping may improve the yield of endomyocardial biopsy by identifying areas of low voltage.87 Endomyocardial biopsy is associated with the risk of perforation, which is increased with RV free wall biopsy.85,88 Septal biopsy is generally not helpful because the septum is typically the least affected area of the myocardium in ARVC.86 Novel immunohistochemical analysis in patients with ARVC with desmosomal variants demonstrated altered plakoglobin and connexin43 signal as a marker of disease expression79,89–91; however, this has not proven to be of diagnostic utility. Sarcoidosis, for which treatment may include steroids, is important in the differential diagnosis of ARVC, but similar limitations with regard to sampling error and risk are present. Myocardial tissue obtained from postmortem and explanted hearts will have the value but not the limitations of endomyocardial biopsy and should be sought and examined whenever feasible.

3.8. Genetic testing
General concepts on the role of genetic testing in the diagnosis and management of ARVC and other ACMs are outlined below, with recommendation flow diagrams shown in Figure 10 and Figure 11.

3.8.1. Genetic testing methods
Several methods are available to identify the genetic basis of an ACM. Single genes are usually analyzed by Sanger sequencing, which has proven to be a reliable technique for identifying variants underlying genetic disease and has been the gold standard for decades. Given the increasing number of genes identified as underlying a specific cardiac disorder (genetic heterogeneity) and the fact that more than one gene and/or variant (digenic or polygenic inheritance) can contribute to the disease phenotype,75,92 next-generation sequencing (NGS)–based methods enable the parallel sequencing of several targeted genes (a panel, eg, a cardiomyopathy panel) at the same time and at relatively low cost.93 In addition to these targeted NGS panels, sequencing of all protein-coding genes (exome) of the human genome (whole exome sequencing [WES]) or even all DNA nucleotides (whole genome sequencing [WGS]) can be performed.

3.8.2. Variant and gene interpretation
DNA sequences normally vary in the general population when comparing different individuals. However, even when they reside in bona fide ACM-susceptibility genes, not every DNA variant contributes to the disease.94 The major challenge is to correctly assign potential pathogenicity to these DNA variants.
The American College of Medical Genetics and Genomics (ACMG) has published guidelines for interpreting genetic variants and proposed a classification based on the likelihood that a variant is related to disease (Table 2): pathogenic (class 5), likely pathogenic (class 4), uncertain significance (class 3), likely benign (class 2), or benign (class 1), in which "likely pathogenic" and "likely benign" are used to mean greater than 90% certainty of a variant being either disease-causing or benign, respectively.95

The importance of correctly interpreting an identified variant's pathogenicity is now considered the most critical step in genetic testing, especially considering that there appears to be substantial interreviewer disagreement over variant interpretation.97–100 Ethnicity information is essential for interpreting the data.101 Within the ACMs, examples of incorrect classification of variants in major ARVC-related genes have been published.102–106 Besides variant adjudication and the vexing variant of uncertain significance (VUS), many alleged and published ACM-susceptibility genes are being re-analyzed as to the strength of their disease-gene association and, over time, several published ACM-susceptibility genes may be demoted to genes of uncertain significance. Accordingly, when evaluating patients suspected of an ACM, it is critical that the genetic tests conducted as part of the evaluation and the interpretation of the genetic test results be conducted by comprehensive teams with expertise in these disorders.107

Several genes have been implicated in ACM, with varying strength of evidence (Table 3). The ClinGen Cardiovascular Clinical Domain Working Group for cardiovascular disorders is curating genes in relation to specific disorders.108 One of the first efforts in adapting the ACMG 2015 guidelines for variant interpretation in genes related to cardiogenetic disease has recently been published, and this process is also underway for ACM.109

Depending on the reason for using the results of a genetic test, a certain amount of evidence for pathogenicity is necessary; for prenatal diagnostics or a pre-implantation genetic diagnosis, the evidence for pathogenicity must be strong, and only class 5 variants are used. For genetic cascade screening in family members, only class 4 and 5 variants are used; family members negative for the family's class 5 variant are dismissed from regular cardiologic follow-up, whereas relatives who test negative for a given family's class 4 variant remain in the cardiogenetic clinics, albeit with longer follow-up intervals. The frequency and duration of follow-up for family members who are negative for a class 4 variant should be individualized at the discretion of the clinical team. Class 3 variants (ie, a VUS) should be deemed "nonactionable." Given both incomplete penetrance and age-dependent penetrance, clinically unaffected family members should not be tested to determine their status for a class 3 variant found in the family unless additional evidence (such as various functional validation assays and/or demonstration of cosegregation among clinically affected family members) has been obtained that would prompt a variant promotion from an ambiguous class 3 variant (VUS) to a clinically actionable class 4 or class 5 variant.

Table 2 Classification of likelihood of pathogenicity of a variant
- Class 5: pathogenic (likelihood of being pathogenic >95%)
- Class 4: likely pathogenic (>90%)
- Class 3: variant of uncertain significance (10%–90%)
- Class 2: likely benign (<10%)
- Class 1: benign (<5%)
Adapted from Plon et al.96
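The class-based actionability rules described above lend themselves to a compact summary. The following sketch is illustrative only (not part of the consensus statement; names are hypothetical) and deliberately reduces the individualized clinical judgment the document calls for to simple string labels.

```python
# Illustrative sketch only: the class-based cascade screening logic
# described above. Class 4/5 variants are actionable in relatives; class 3
# (VUS) and below are not used for predictive testing.

def cascade_screening_action(variant_class: int, relative_tests_negative: bool) -> str:
    """Suggest a disposition for a family member, per the rules above."""
    if variant_class == 5 and relative_tests_negative:
        return "dismiss from regular cardiologic follow-up"
    if variant_class == 4 and relative_tests_negative:
        return "retain in cardiogenetic clinic, longer individualized follow-up intervals"
    if variant_class in (4, 5):
        return "variant-positive relative: clinical evaluation and follow-up"
    # Classes 1-3, including VUS: nonactionable for predictive testing.
    return "nonactionable: do not use for predictive testing in relatives"

print(cascade_screening_action(variant_class=4, relative_tests_negative=True))
```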
3.8.3. Which test to use

With the availability of NGS, the number of genes that can be studied in a single patient rapidly increases. However, the value of including a greater number of genes in a panel should be weighed against the drawback of adding genes that have insufficient evidence (or none) of being related to the patient's disease, or that account for only a small percentage of genotyped patients and are therefore more prone to errors in attributing a pathogenic role to the identified variants.

A list of core genes can therefore focus testing on those with sufficient evidence of being disease-related. The ClinGen working group for cardiovascular disorders is responsible for reviewing clinical, genetic, and experimental data to establish the strength of evidence supporting gene–disease associations in heart disease. Gene curation for HCM was recently completed, and curation for ARVC and DCM is underway.110,111 Until the official ClinGen-approved results of these gene curation efforts are available, we anticipate that the genes listed in Table 3 will likely be retained as ACM-susceptibility genes with sufficient evidence to merit their disease–gene association and will be useful in clinical practice. These recognized genes should therefore be prioritized for patients and families with a clinical diagnosis of ACM or its subforms. If other genes are included in the analysis, identifying a pathogenic or likely pathogenic variant in one of the non–ACM-related genes should not automatically or reflexively be considered an explanation for the patient's ACM phenotype. In other words, a pathogenic or likely pathogenic variant in KCNH2 (a gene in which P/LP variants cause abnormalities in the QTc without structural heart disease) does not carry the same intrinsic probability of pathogenicity for ACM as a plakophilin-2 (PKP2) variant that has been graded as pathogenic or likely pathogenic.

Table 2 Classification of likelihood of pathogenicity of a variant

| Classification of variant | Description | Likelihood of being pathogenic |
|---|---|---|
| Class 5 | Pathogenic | >95% |
| Class 4 | Likely pathogenic | >90% |
| Class 3 | Variant of uncertain significance | 10%–90% |
| Class 2 | Likely benign | <10% |
| Class 1 | Benign | <5% |

Adapted from Plon et al.96

A recent viewpoint paper by the European Society of Cardiology (ESC) working group on myocardial and pericardial diseases emphasized that, in a diagnostic setting, only recognized genes associated with the condition should be investigated in patients who meet the diagnostic criteria of a specific cardiovascular condition. WES and WGS should be used for genetic diagnosis only if filtered against recognized disease-causing genes, and coverage should enable the identification of all exonic variants in these genes.107

3.8.4. Advantages and disadvantages of various methods

The various techniques that can be used for genetic testing each have their own advantages and disadvantages, as summarized in Table 4. Coverage of the genomic regions of interest, the possibility of identifying large deletions/duplications, flexibility, and costs are important factors to consider when ordering a genetic test.
Sanger sequencing is a reliable method with good coverage of the nucleotides that need to be studied, particularly for evaluating a single gene or a small number of genes. Sanger sequencing is also appropriate for cascade testing in at-risk family members, clinical confirmation of research genetic results, and cosegregation studies.

Table 4 Different methods for screening genes

| Method | Target | Coverage | CNVs | Flexibility | Costs |
|---|---|---|---|---|---|
| Sanger sequencing | Single gene(s) | ++ | −− | − | IE |
| Targeted NGS panel | Panel of genes of interest | + | + | − | +/− |
| WES filtered against genes of interest | Set of genes of interest | +/− | +/− | + | + |
| WES | All genes | +/− | +/− | + | + |
| WGS | All genes and intronic sequences | + | + | + | ++ |

CNVs = copy number variations; IE = inefficient (expensive for large amounts of sequencing but inexpensive for a small amount); NGS = next-generation sequencing; WES = whole exome sequencing; WGS = whole genome sequencing; ++ = very high; + = high; +/− = intermediate; − = low; −− = very low.

Table 3 Minimum set of genes to be prioritized in arrhythmogenic cardiomyopathy (ACM)

| Gene | Protein type | Predominant type of mutation | OR/EF100 | Signal:Background94 | Remarks | References |
|---|---|---|---|---|---|---|
| BAG3 | Chaperone | Truncating and missense | NA | NA | Also causes myofibrillar myopathy | 121 |
| DES | IF | Truncating and missense | NA | NA | Also causes myofibrillar myopathy | 122 |
| DSC2 | Desm | Truncating and missense | NT 2.15 (EF 0.53); T 21.5 (EF 0.95) | ns; ns | Rare | 26 |
| DSG2 | Desm | Truncating and missense | NT 2.83 (EF 0.65); T 19.8 (EF 0.95) | 2:1 (NT/T) | Rarely recessive | 123 |
| DSP | Desm | Truncating and missense | NT 2.1 (EF 0.52); T 89.9 (EF 0.99) | ns; ns | Recessive: Carvajal syndrome | 23,124 |
| FLNC | Actin crosslink | Truncating and missense | NA | NA | Also causes myofibrillar myopathy | 34 |
| JUP | Desm | Missense | NT 7.8 (EF 0.87); T 28.1 (EF –) | – | Recessive: Naxos syndrome | 22,125 |
| LDB3 | Z-band | Missense | NA | NA | Cypher/ZASP | 126 |
| LMNA | NE | Truncating and missense | NA | NA | AV block; CD | 127 |
| NKX2-5 | Homeobox | Truncating and missense | NA | NA | AV block, CD, CHD | 128 |
| PKP2 | Desm | Truncating | NT 1.3 (EF 0.23); T 484.7 (EF 1.0) | 10:1; 42:1 | Large deletions 1%–2% | 24 |
| PLN | Ca | Missense, nonsense, and deletion | NA | NA | Predominantly R14del | 33,129 |
| RBM20 | Splice factor | Missense | NA | NA | Mostly in exon 9 | 130 |
| SCN5A | Sodium channel | Mostly missense | NA | NA | Brugada, SND, CD | 131 |
| TMEM43 | NE | Missense | NT 0.76 (EF –); T 13 (EF –) | ns | p.S358L disease-causing; also called LUMA | 132 |

These genes have multiple lines of evidence indicating involvement in ACM and its subtypes (arrhythmogenic left ventricular cardiomyopathy, arrhythmogenic right ventricular cardiomyopathy). OR/EF and Signal:Background data are largely derived from cohorts with western European ancestry and may differ in other ethnicities. Genes with significant excess in cases over ExAC reference samples.100 Other genes that have been identified in ACM with insufficient or conflicting evidence are ABCC9,112 TGFB3,113 TTN,114 CTNNA3,115 sarcomeric genes (MYH7, MYBPC3),116,117 SCN3B,117 CDH2,118,119 and TJP1.120

AV = atrioventricular; BV = biventricular; Ca = calcium handling; CD = conduction delay; CHD = congenital heart disease; CPVT = catecholaminergic polymorphic ventricular tachycardia; DES = desmin; Desm = desmosomal; DSC2 = desmocollin-2; DSG2 = desmoglein-2; EF = etiological fraction; IF = intermediate filament; LD = left dominant; NA = data not available; NE = nuclear envelope; ns = not significant; NT = nontruncating variants; OR = odds ratio; RD = right dominant; SND = sinus node dysfunction; T = truncating variants.
However, large deletions and duplications of genes can be missed by Sanger sequencing. Larger deletions and/or duplications (eg, in PKP2) are a known cause of ACM68,133,134 and can be identified in a small percentage of cases. Targeted NGS panels have the advantage that they are well validated, and it is known which regions are insufficiently covered; additional Sanger sequencing experiments are frequently used to evaluate those regions.93 Bioinformatic tools must be added to the bioinformatics pipeline to identify deletions and/or duplications in the genes of interest in targeted panel screening, which is a relatively inexpensive, fast, and reliable method to study larger series of genes. The results of exome sequencing, a relatively fast test, can be filtered against the set of core genes rather than evaluating all 20,000+ human genes; this reduces the chance of incidental findings. The major advantage of exome sequencing is that novel or additional genes can easily be added by "opening" the data whenever new disease genes are established. On the downside, the quality and/or coverage of some parts of the core genes may be insufficient, and larger deletions and/or duplications can easily be missed.
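To make the filtering step concrete, here is a minimal sketch under stated assumptions: variant calls are represented as simple dicts with an annotated gene symbol (a stand-in for real VCF/annotation output, which this is not), and the core gene set is taken from Table 3. None of the function or field names come from the consensus statement.

```python
# A minimal sketch, not a validated bioinformatics pipeline: filtering
# exome (WES) variant calls against the core ACM gene set of Table 3.
# Only the gene symbols come from the document; everything else is
# an illustrative simplification.

CORE_ACM_GENES = {
    "BAG3", "DES", "DSC2", "DSG2", "DSP", "FLNC", "JUP", "LDB3",
    "LMNA", "NKX2-5", "PKP2", "PLN", "RBM20", "SCN5A", "TMEM43",
}

def filter_to_core_genes(variant_calls):
    """Keep only calls annotated to a recognized ACM-susceptibility gene,
    reducing incidental findings from ~20,000 genes to the core set."""
    return [v for v in variant_calls if v.get("gene") in CORE_ACM_GENES]

calls = [
    {"id": "var1", "gene": "PKP2"},   # core desmosomal gene: retained
    {"id": "var2", "gene": "TTN"},    # insufficient-evidence gene: dropped
    {"id": "var3", "gene": "PLN"},    # core gene: retained
]
print(filter_to_core_genes(calls))   # -> var1 and var3 only
```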
3.8.5. Who to study

Recommendation (COR I, LOE C-EO): For individuals and decedents with either a clinical or necropsy diagnosis of ACM, genetic testing of the established ACM-susceptibility genes is recommended.

Recommendation (COR I, LOE C-EO): For genetic testing of the established ACM-susceptibility genes, comprehensive analysis of all established genes with full coverage is recommended.

A genetic test is generally performed in an index patient with either a clinical diagnosis that fulfills the clinical criteria for the disease in question or when there is at least a reasonable index of suspicion for that specific disorder. Both the selected disease gene panel and the subsequent genetic test interpretation should be strongly influenced by the veracity of the phenotype. Genetic testing of patients with nonspecific syncope or TWIs confined to precordial lead V1, for example, should be strongly discouraged.135 When interpreting a genetic test, the available evidence that a specific gene is related to ACM should be taken into account. The test used should be of sufficient quality to identify variants in these genes; this may entail additional tests to cover all exons and additional bioinformatic and laboratory tests to identify deletions and duplications. For individuals who have died suddenly with a postmortem (likely) diagnosis of ACM or one of its subforms, postmortem genetic testing should likewise include the disease genes implicated in the necropsy diagnosis. Various sources of DNA can be used, such as blood, frozen tissue, fibroblasts from a skin biopsy, and even formalin-fixed paraffin-embedded tissue.136,137 ACM-associated genes can also be evaluated in autopsy-negative SCD cases because ventricular arrhythmias leading to SCD may precede structural abnormalities.138 Table 3 lists the minimum set of genes to be evaluated.

3.8.6. The role of genetic testing in arrhythmogenic cardiomyopathies

A positive genetic test result (ie, a likely pathogenic, class 4, or pathogenic, class 5, variant) can (1) genetically confirm the clinical diagnosis and provide disease–gene-specific risk stratification and tailoring of therapies139 and (2) enable variant-specific cascade genetic testing of appropriate family members and relatives (see Section 3.9 Cascade Family Screening), including the potential for prenatal or preimplantation genetic diagnostics (a topic beyond the scope of this consensus statement).

In the current Task Force Criteria for ARVC,10 the "identification of a pathogenic mutation categorized as associated or probably associated with ARVC in the patient under evaluation" is weighted as a major criterion in the "family history" section. A pathogenic mutation (now classified as either a class 4 or class 5 variant per ACMG nomenclature) is defined as "a DNA alteration associated with ARVC that alters or is expected to alter the encoded protein, is unobserved or rare in a large non-ARVC control population, and either alters or is predicted to alter the structure or function of the protein or has demonstrated linkage to the disease phenotype in a conclusive pedigree." Since a positive genetic test result is regarded as a major criterion, it can contribute up to 50% of the diagnosis of ARVC, highlighting the importance of an experienced genetic team. Nevertheless, there is the question of whether this much weight should be placed on a genetic result whose true characteristics, such as penetrance, are generally not well known.

3.8.7. The use of a genetic test in risk stratification and management

Whether the result of a genetic test can be used for risk stratification or management depends on the known relationship between genotype and phenotype. In general, there is limited evidence for a clinically actionable relationship between genotype and phenotype, with a few exceptions presented in the following subsections.

3.8.7.1. Left ventricular dysfunction

LV dysfunction is most often present in patients with ACM with pathogenic or likely pathogenic variants in LMNA, BAG3, or one of the founder variants in the PLN and TMEM43 genes, followed by variants in DSP and DSG2/DSC2, with the lowest frequency in PKP2/JUP. This holds true for both index patients and family members.140,141

3.8.7.2. Multiple variants

Approximately 3%–6% of patients have more than one pathogenic or likely pathogenic variant contributing to the disease phenotype. Patients with multiple pathogenic variant-mediated ACM have more severe disease, as reflected by an earlier age at disease onset92 and earlier presence of VTs (<20 years vs 35 years for patients with a single ACM-causative variant),68 a higher lifetime risk of arrhythmia142 or SCD,143 and earlier progression to cardiomyopathy.141,144,145

3.8.7.3. Specific variants and genes

3.8.7.3.1. Desmosomal genes

Disease expression reaching diagnostic criteria is most common between 20 and 50 years of age (40%; 95% CI, 34%–46%),146 although in one series, 9 of 40 pediatric desmosomal gene-positive patients had the disease at a mean age of 17.8 ± 5.1 years.147 LGE identified by CMR, most frequently seen in the LV myocardium, was the first evidence of disease expression in a small subset of individuals.75 In a comprehensive evaluation of 274 family members, the incidence of a new diagnosis (as per 2010 Task Force Criteria) in those aged 10–20 years was 0.5 per 100 person-years, the odds ratio of a diagnosis in those aged <18 years in the multivariate analysis was 0.37 (0.14–0.93), and no diagnosis was reached under the age of 14 years.
Likewise, a new diagnosis in relatives older than 60 years is less common.146 The cumulative prevalence by decade is shown in Figure 9, based on data from Quarta et al.147 Overall, relatives have less severe disease compared with probands, are more commonly asymptomatic, and show disease onset at an older age.145 Arrhythmic events in family members appear to occur only in the presence of manifest electrocardiographic and structural changes.146,148 Similarly enhanced disease activity is observed in pediatric probands compared with their age-matched relatives.75

3.8.7.3.1.1. Desmoplakin (DSP)

Pathogenic variants in DSP-encoded desmoplakin are associated with a spectrum of disorders, including cardiocutaneous syndromes. For patients with likely pathogenic (class 4) or pathogenic (class 5) variants in DSP, over 50% of index patients and 17% of family members have an arrhythmic phenotype with LV dysfunction (Table 2).141 In addition to biventricular forms, left-dominant forms are also present, and extensive fibrotic patterns can be identified by CMR (see Section 5.5 Left Ventricular Noncompaction).140,158

3.8.7.3.2. Lamin A/C (LMNA)

The cardiac phenotype of LMNA-mediated ACM is characterized by atrial fibrillation and cardiac conduction disease, which can precede the development of ventricular arrhythmias and cardiomyopathy by decades.149,150 LMNA variants have also been identified in patients diagnosed with ARVC151–153 or with more biventricular and left-dominant forms of the disease.127,154 Risk stratification has been reported from Asian and European populations.155,156 In the European study, nonsustained ventricular tachycardia (NSVT), left ventricular ejection fraction (LVEF) <45% at first clinical contact, male sex, and nonmissense variants were reported to be risk factors for malignant ventricular arrhythmias.156 Patients with a LMNA variant who need a pacemaker often receive an ICD, which is effective in treating potentially lethal tachyarrhythmias.157

3.8.7.3.3. Transmembrane protein 43 (TMEM43)

The p.S358L mutation in transmembrane protein 43 (TMEM43) is a specific founder variant that has been identified in a large number of patients diagnosed with ARVC from Europe and Canada (Newfoundland).132,159 Its clinical phenotype is characterized by poor R wave progression in the precordial leads and LV enlargement in 43% of affected individuals, with 11% meeting the criteria for DCM.160 A study involving nearly 150 p.S358L-TMEM43-positive individuals concluded that survival was greater for those treated with an ICD than for those receiving conventional, non-ICD care.160

3.8.7.3.4. Phospholamban (PLN)

The pathogenic p.R14del-PLN variant has been identified in 1% of patients with ARVC in the United States and 12% of Dutch patients with ARVC,33 as well as in patients from several other countries (Spain, Germany, Greece, Canada, Norway). Patients with this variant frequently have low-voltage ECGs and are considered to be at high risk for malignant ventricular arrhythmias and end-stage HF, with LVEF <45% and sustained VT or NSVT as independent risk factors (see Section 5.4 Phospholamban).161

Figure 9 Cumulative prevalence of disease expression in family members at risk of arrhythmogenic right ventricular cardiomyopathy (ARVC).147
3.8.8. Limitations of genetic testing

The ACMG has issued an updated list of over 50 actionable genes.162 Laboratories performing WES or WGS (generally for diagnostic odyssey cases) should report the presence of pathogenic or likely pathogenic variants residing in these genes, unless the individual being tested has chosen not to receive these results. This list includes 5 of the established ARVC-susceptibility genes. For these incidental findings, however, the frequency of related clinical phenotypes in unselected patient populations is generally not well established. When variants in a known ARVC-susceptibility gene are identified in the context of a non–phenotype-driven incidental finding, the likelihood that the variant (even if graded as class 4 or 5) portends the presence of ARVC or the risk of developing ARVC in the future is considered low, as was recently established for arrhythmia and ARVC-related genes.98,163

3.9. Cascade family screening

See Evidence Table: Cascade Family Screening. A flow diagram of recommendations is shown in Figure 11.

3.9.1. Cascade family screening: screening recommendations in children and adults

Clinical cascade testing refers to the cardiovascular and genetic evaluation of first-degree family members of an individual (proband) with a confirmed diagnosis of ACM and is ideally performed within the confines of a multidisciplinary cardiovascular genetics program familiar with the clinical and genetic complexities of the condition.164 The underlying etiology of ACM in many cases is alterations in cardiac genes that encode proteins critical to normal heart development and/or function. For the most part, these are inherited as an autosomal-dominant trait, such that first-degree family members have a 50% a priori risk of developing ACM, although penetrance and disease severity are typically lower in family members than in probands.145 Detailed clinical and genetic familial evaluation, both at the time of diagnosis and during follow-up, will help determine the inheritance pattern and the likelihood of consanguinity. Desmosomal variants are relatively common in control populations and may erroneously be considered disease-causing,94 although certain variants have a well-recognized association with the condition, and targeted genetic testing can be used in isolation within specific families.

Recommendation (COR IIa, LOE C-EO): The interpretation of a cardiac genetic test by a team of providers with expertise in genetics and cardiology can be useful.

Performing a genetic test on an index patient or relative has several aspects that must be considered and thus requires a comprehensive, expert team. There are specific test-related "technical" aspects that result in some variants not being detected by certain tests (see Section 3.8.4 Advantages and Disadvantages of Various Methods). The interpretation of a genetic test requires accurate interpretation of variants; for example, class 1, 2, and 3 variants are not considered actionable. The interpretation is also influenced by the pretest probability, which depends greatly on the precise clinical characterization of the phenotype. Additionally, the identification of a genetic defect does not necessarily predict the disease severity in that specific individual.
When using a panel that includes additional genes underlying other phenotypes, incidental findings may be identified, such as likely pathogenic or pathogenic variants (class 4 and 5) that could lead to a different phenotype than the one that motivated the referral.

Genetic testing can cause a mixture of positive and negative emotions for the patient. Genetic counselors can help patients and their families navigate these feelings and learn to live with an inherited condition, and can explain the implications of identified genetic variants in ways that alleviate the anger, anxiety, fear, and guilt that are likely to occur in patients and their families. The expert team should therefore consist, at minimum, of cardiologists, clinical and molecular geneticists, genetic counselors, and pathologists, or individuals with expertise encompassing these subspecialties.

Figure 10 Genetic testing recommendations. ACM = arrhythmogenic cardiomyopathy; COR = Class of Recommendation; LOE = Level of Evidence. Colors correspond to COR in Figure 1.

3.9.1.1. Family history

Recommendation (COR I, LOE C-EO): It is recommended that a genetic counselor or appropriately experienced clinician obtain a comprehensive 3-generation family history.

A detailed 3-generation family history collected from the proband at the initial consultation is vital and should be obtained by a genetic counselor or an appropriately experienced clinician.165–168 The family history can be used to determine the existence of familial disease, provide important data regarding the full phenotypic spectrum within the family, and identify relatives who should be informed of the need for cardiac evaluation.

3.9.1.2. Cardiac evaluation

The yield of cardiac screening is highly variable due to age-related and typically incomplete penetrance, and the disease spectrum can be diverse even within families harboring the same variant, incorporating right-sided, left-dominant, and biventricular phenotypes. Family members may display a relatively mild or incomplete phenotype, including subtle electrocardiographic or structural abnormalities.

3.9.1.3. Age-related penetrance of disease in at-risk relatives

Recommendation (COR I, LOE B-NR): It is recommended that first-degree relatives undergo clinical evaluation every 1–3 years starting at 10–12 years of age.34,75,146,160,161,169,170

ACM variants can display incomplete penetrance and varied expression. In ARVC there is age-related penetrance, with onset typically observed in the third and fourth decades of life, although this may vary with the underlying etiology and specific familial characteristics.
Disease expression is, however, recognized in adolescents, although it is extremely rare under the age of 10 and is almost exclusively seen in probands.75,145,171 At-risk relatives who undergo clinical evaluation may be clinically affected, have borderline disease (incomplete penetrance), or be clinically unaffected. Serial evaluation can define ongoing disease expression and risk stratification.50,147 In a study of families with ARVC, the highest probability of a diagnosis of ARVC occurred between 20 and 50 years of age (40%; 95% CI, 34%–46%).146

3.9.1.4. Cascade cardiac investigations

Recommendation (COR I, LOE B-NR): Cardiovascular evaluation should include 12-lead ECG, ambulatory ECG, and cardiac imaging.21,75,145–147,172–174

Evaluation of all at-risk family members should include a 12-lead ECG, 24-hour Holter monitoring, and cardiac imaging. The exact imaging modality (echocardiogram, CMR, or CT) can vary depending on availability and institutional expertise. A study of relatives harboring a PKP2 causal variant identified in the proband showed that approximately one-third had a diagnosis of ARVC, one-third had borderline disease, and one-third were unaffected,172 although other studies have shown a much lower diagnostic rate among family members.173 In relatives who demonstrate disease features, electrocardiographic changes typically occur earlier and more commonly than structural changes,174 although subtle structural abnormalities can be identified by detailed echocardiographic analysis.77,175 LGE on CMR, most frequently observed in the LV myocardium, was the first evidence of disease expression in a small subset.75

Recommendation (COR IIb, LOE C-LD): Exercise stress testing (arrhythmia provocation) may be considered as a useful adjunct to cardiovascular evaluation.148

Exercise stress testing may expose a latent phenotype by initiating ventricular ectopy or arrhythmia.148 Symptoms such as syncope or palpitations should prompt an urgent evaluation.

3.9.1.5. Cascade genetic testing

When a likely pathogenic or pathogenic genetic variant has been identified in the proband, cascade genetic testing can be offered to first-degree at-risk relatives. Cascade genetic testing should only be offered in the context of comprehensive pretest genetic counseling, with the goals of discussing the process of genetic testing; the implications of the results for patients and their family members; social, lifestyle, and insurance implications; and an examination of patients' feelings about either a positive or negative result.166,176 Inappropriate use of genetic testing in a family has the potential to introduce unnecessary worry and fear, as well as potential harms related to the misinterpretation of genetic variants.166,176 Cascade genetic testing is therefore only offered to family members when a likely pathogenic or pathogenic variant in a known disease-associated gene has been identified in the proband and can be interpreted with an appropriate level of expertise. Consideration must also be given to the family members' psychosocial wellbeing. Efforts to ensure ongoing reclassification of variants are critically important for cascade genetic testing, and families benefit from being managed in a specialized multidisciplinary cardiac genetic service. Ideally, systematic processes, or a combined approach relying on new information from the testing laboratories and review of family variants triggered by a family member returning for routine follow-up, should be in place.

Recommendation (COR IIb, LOE C-EO): In families with a variant classified as pathogenic, it may be reasonable for asymptomatic members of the family who do not have the familial variant and have a normal cardiovascular evaluation to be released from regular screening and educated to return if disease symptoms occur.

At present, the key role of genetic testing for many ACM conditions is to identify asymptomatic carriers who can be targeted for closer surveillance, or gene-negative relatives who are unlikely to develop disease and can be released from future screening.169 Comprehensive cardiovascular and genetic investigation will also help confirm variant status within the wider family. Family members who are comprehensively evaluated and who do not carry the pathogenic variant may be released from further regular evaluation, although they should be educated regarding specific symptoms and advised to seek further evaluation should these occur.
3.9.1.6. Cascade genetic testing in minors

Cascade testing for familial variants in children remains controversial, given the complex medical, legal, and psychological issues involved. Testing is typically deferred until an age when clinical features are more likely, although this can be affected by the clinical disease spectrum and segregation of the variant in other family members, coupled with the specific preferences of the child and family. Genetic testing should always be guided by the child's best interest and performed by a multidisciplinary team including specialist cardiologists, geneticists, genetic counselors, and psychologists with expertise in genetic counseling, variant interpretation, and disease management, when feasible.

Figure 11 Summary of family screening recommendations. COR = Class of Recommendation; ECG = electrocardiogram; LOE = Level of Evidence. Colors correspond to COR in Figure 1. Figure notes: (a) the use of genetic testing assumes prior identification of a pathogenic variant in the proband; (b) screening intervals may vary with age, lifestyle, and family history; (c) screening may start earlier if the family history suggests potential for earlier onset or the presence of a pathogenic variant in the family supports presymptomatic genetic testing.

3.10. Risk stratification and implantable cardioverter defibrillator decisions

See Evidence Table: Risk Stratification and ICD Decisions. The recommendation flow diagram is shown in Figure 12.

SCD is the most feared consequence of ACM. In series of SCD occurrences in young individuals, ARVC accounts for up to 20% of cases, particularly in certain genetically distinct ethnic populations.
There are fewer data on the contribution of other ACMs to SCD, likely because of the difficulty of diagnosing these diseases postmortem. Prevention of SCD is possible with ICDs; identifying patients at risk of SCD is necessary to target those who should receive them.

Risk stratification is limited by the available data, nearly all of which are retrospective and obtained from patients referred to tertiary care centers. Some of the larger, more recent registry data also almost certainly contain patients who were previously reported in earlier publications from the same center; thus, the largest, most recent registries are the most reliable in terms of risk assessment. Most series include patients with ICDs; in fact, in some series an ICD is a requirement for entry into the registry. Appropriate therapies for VT, ventricular flutter (VFL), and VF are included as endpoints. ICD-treated arrhythmias are used as a surrogate for SCD, but there is abundant evidence that not all ICD-treated arrhythmias would have led to SCD. To make the SCD endpoint more specific and detailed, a number of studies use the separate endpoints of potentially life-threatening arrhythmias and all ventricular arrhythmias. In these studies, life-threatening arrhythmias are generally defined as SCD or hypotensive VT in patients without ICDs, and ICD-treated VF or VFL ≥240 bpm in those with ICDs. All arrhythmias are generally defined as any sustained arrhythmia (>30 seconds) that occurs spontaneously and any ventricular arrhythmia treated by the ICD, including treatment with antitachycardia pacing or shock. Some registries combine cardiovascular death, heart transplantation, and ventricular arrhythmias into a composite endpoint.
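Because these endpoint definitions recur across the registries discussed below, a small classifier can make them explicit. This is a rough sketch for illustration only: the event fields (`type`, `rate_bpm`, `sustained_seconds`, `icd_treated`) and the function name are assumptions, not part of any registry's actual data dictionary, and the reading of the VF/VFL rate threshold is ours.

```python
# A hedged sketch encoding the endpoint definitions quoted above so that
# registry events could be labeled consistently. Illustrative only.

def classify_event(event: dict, has_icd: bool) -> str:
    """Label an event per the definitions used in the risk studies.

    Life-threatening: SCD or hypotensive VT (no ICD), or ICD-treated VF
    or VFL at >=240 bpm (with ICD; treating the rate cut-off as applying
    to VFL is our reading of the text). Any ventricular arrhythmia:
    a spontaneous sustained arrhythmia (>30 s) or any ICD-treated
    arrhythmia, including antitachycardia pacing or shock.
    """
    if not has_icd:
        if event.get("type") in {"SCD", "hypotensive VT"}:
            return "life-threatening"
    else:
        if event.get("type") == "VF" or (
            event.get("type") == "VFL" and event.get("rate_bpm", 0) >= 240
        ):
            return "life-threatening"
        if event.get("icd_treated"):
            return "any ventricular arrhythmia"
    if event.get("sustained_seconds", 0) > 30:
        return "any ventricular arrhythmia"
    return "not an endpoint"

print(classify_event({"type": "VFL", "rate_bpm": 250, "icd_treated": True},
                     has_icd=True))  # -> life-threatening
```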
An international collaboration of 18 centers from Europe and North America developed a risk model177 in which male sex, relative youth, and electrocardiographic and imaging features reflecting more extensive RV disease, together with the severity of ventricular arrhythmia, were the most accurate identifiers of the high-risk cohort studied. The model provides 1- to 5-year ventricular arrhythmia event-free survival rates for the predicted high-risk group and the potential to determine 5-year risk in the individual patient.

Recommendation (COR I, LOE C-EO): The decision to implant an ICD in an individual with ACM should be a shared decision between the patient and the physician, taking into account the risks and benefits of the ICD over the potential longevity of the patient.

A shared decision-making process is essential to clarify the anticipated benefits of an ICD for each individual patient. Potential options for therapy and the evidence supporting them are discussed to enable patients to make an informed decision.

Recommendation (COR I, LOE B-NR): In individuals with ACM who have suffered a cardiac arrest with VT or VF, an ICD is recommended.83,178–182

Recommendation (COR I, LOE B-NR): In individuals with ACM who have sustained VT that is not hemodynamically tolerated, an ICD is recommended.83,178–180,182

As with other diseases, previous sustained ventricular arrhythmia is undoubtedly the strongest predictor of recurrent ventricular arrhythmia. Cohort studies that included patients with ARVC and ventricular arrhythmic (VT or VF) events prior to enrollment have shown that these arrhythmic events are a strong predictor of future life-threatening ventricular arrhythmias; an ICD can therefore be a life-saving device.83,178–180

Recommendation (COR IIa, LOE B-NR): In individuals with ACM and syncope suspected to be due to a ventricular arrhythmia, an ICD is reasonable.82,83,178–180,183–189

Syncope is a common symptom in young individuals, and it is important to clarify whether syncope is likely due to a ventricular arrhythmia. In cohort studies, syncope is an independent predictor of future ventricular arrhythmic events.82,83,178–180,183,184 In the Pavia Registry, 73 of 301 patients followed for a mean of 5.8 years had a clinical outcome of SCD, aborted SCD, syncopal VT or electrical storm, or cardiovascular mortality,183 with a history of syncope being an independent predictor (hazard ratio [HR]: 4.38; P = .002). In the Hopkins Registry, 186 of 312 patients followed for 8.8 ± 7.3 years had a clinical outcome of VT or VF, with syncope as a univariate predictor (HR: 1.85; P = .021).
Figure 12 Implantable cardioverter defibrillator (ICD) recommendations. See Section 5 Other Disorders for recommendations regarding left ventricular noncompaction. ACM = arrhythmogenic cardiomyopathy; ARVC = arrhythmogenic right ventricular cardiomyopathy; COR = Class of Recommendation; EPS = electrophysiology studies; FLNC = filamin-C; LOE = Level of Evidence; LVEF = left ventricular ejection fraction; NSVT = nonsustained ventricular tachycardia; NYHA = New York Heart Association; PVC = premature ventricular contraction; VF = ventricular fibrillation; VT = ventricular tachycardia. Colors correspond to COR in Figure 1.

Recommendation (COR IIa, LOE B-NR): In individuals with ARVC with hemodynamically tolerated sustained VT, an ICD is reasonable.178–180,183

Hemodynamically tolerated VT has also been associated with adverse arrhythmic outcomes. In the Pavia Registry, 73 of 301 patients followed for a mean of 5.8 years had a clinical outcome of SCD, aborted SCD, syncopal VT or electrical storm, or cardiovascular mortality,183 and hemodynamically tolerated monomorphic VT was an independent predictor (HR: 2.19; P = .023). In the Hopkins Registry, VT at presentation was a univariate predictor for VT (HR: 1.86; P < .001).

Recommendation (COR IIa, LOE B-NR): ICD implantation is reasonable for individuals with ARVC and three major, two major and two minor, or one major and four minor risk factors for ventricular arrhythmia.179,180,183,190

Recommendation (COR IIb, LOE B-NR): ICD implantation may be reasonable for individuals with ARVC and two major, one major and two minor, or four minor risk factors for ventricular arrhythmia.179,180,183,190

Major criteria: NSVT, inducibility to VT at EPS, LVEF ≤49%. Minor criteria: male sex, >1000 premature ventricular contractions (PVCs)/24 hours, RV dysfunction (as per the major criteria of the 2010 Task Force Criteria, see Figure 6), proband status, and 2 or more desmosomal variants. If both the NSVT and PVC criteria are present, only NSVT can be used.

The variables associated with VT/VF in more than one cohort include younger age at presentation (significant in 4 series)83,179,180,190 and male sex (significant in 2 series).183,190 The variables associated with VT/VF in only one study include NSVT,82 PVC frequency >1000/24 hours,180 VT inducibility at EPS,180 atrial fibrillation,183 hemodynamically tolerated monomorphic VT,183 participation in strenuous exercise,183 and reduced LVEF.83
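The listed combinations can be read as a simple weighted count: every COR IIa combination totals at least 6 points and every COR IIb combination at least 4 points when a major factor is weighted 2 and a minor factor 1. The sketch below encodes that reading; the weighting is our observation about the stated combinations, not a scoring system given in this document, and the function name is illustrative.

```python
# Hedged sketch of the ARVC major/minor risk-factor combinations above.
# The 2:1 weighting reproduces the listed combinations but is an inferred
# encoding, not an official score. Remember the stated caveat: if both
# the NSVT and PVC criteria are present, count only NSVT.

def arvc_icd_class(n_major: int, n_minor: int) -> str:
    """Map risk-factor counts to the recommendation classes stated above.

    COR IIa combinations (3 major; 2 major + 2 minor; 1 major + 4 minor)
    all score >= 6; COR IIb combinations (2 major; 1 major + 2 minor;
    4 minor) all score >= 4, with major = 2 points and minor = 1 point.
    """
    score = 2 * n_major + n_minor
    if score >= 6:
        return "ICD implantation is reasonable (COR IIa, LOE B-NR)"
    if score >= 4:
        return "ICD implantation may be reasonable (COR IIb, LOE B-NR)"
    return "these criteria alone do not support an ICD recommendation"

print(arvc_icd_class(n_major=2, n_minor=2))  # -> COR IIa
print(arvc_icd_class(n_major=0, n_minor=4))  # -> COR IIb
```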
Recommendation (COR I, LOE B-R): In individuals with ACM with LVEF 35% or lower, NYHA class II–III symptoms, and an expected meaningful survival of greater than 1 year, an ICD is recommended.182,185–188,191

ACM can be secondary to a wide variety of genetic defects and acquired abnormalities, and some patients have structural and functional abnormalities that overlap with DCM. Some etiologies are more likely to present early in their clinical course with ventricular arrhythmias, particularly the inherited cardiomyopathies caused by pathogenic variants in PLN, LMNA, FLNC, TMEM43, RBM20, and DES. In large randomized controlled trials that enrolled patients with DCM, ICDs improved survival. Patients enrolled in these trials had New York Heart Association (NYHA) class II or III symptoms and were undergoing guideline-directed medical therapy for HF.

Recommendation (COR IIa, LOE B-R): In individuals with ACM with LVEF 35% or lower, NYHA class I symptoms, and an expected meaningful survival of greater than 1 year, an ICD is reasonable.187

The Defibrillators in Non-Ischemic Cardiomyopathy Treatment Evaluation (DEFINITE) trial, which randomized patients with nonischemic DCM to an ICD vs medical therapy for the prevention of SCD, included 99 of 458 patients with NYHA class I symptoms.

Recommendation (COR I, LOE B-NR): In individuals with ACM (other than ARVC) and hemodynamically tolerated VT, an ICD is recommended.156,161

Recommendation (COR IIa, LOE B-NR): In individuals with phospholamban cardiomyopathy and LVEF <45% or NSVT, an ICD is reasonable.161

Recommendation (COR IIa, LOE B-NR): In individuals with lamin A/C ACM and two or more of the following: LVEF <45%, NSVT, male sex, an ICD is reasonable.156

In a cohort of 403 patients from the Netherlands with the founder R14del variant in PLN, the independent variables associated with malignant arrhythmias were LVEF <45% and sustained VT or NSVT.161 Other variables were associated with malignant arrhythmias, but none remained significant after multivariate analysis. Although sustained VT was not studied in the lamin A/C cohorts, a finding of NSVT on Holter monitoring was a significant predictor of spontaneous VT/VF in a cohort of 269 patients.156 In patients with lamin A/C variants, several clinical variables are associated with the risk of spontaneous VT/VF or ICD-treated VT/VF. In a cohort of 269 patients from a European registry, NSVT on Holter monitoring, LVEF <45%, and male sex were associated with VT/VF, but only if a patient had 2 or more of these factors.156 In an international registry of 122 patients, male sex, LVEF <50% at the first clinical contact, and nonmissense variants were independent predictors of arrhythmias.32 In this study, the risk of arrhythmia increased exponentially with the number of these predictors: during the 7-year follow-up, the incidence of sustained VT/VF was 9% with 1 risk factor, increasing to 28% with 2, 47% with 3, and 69% with 4 risk factors.

Recommendation (COR IIa, LOE C-LD): In individuals with FLNC ACM and an LVEF <45%, an ICD is reasonable.34

Variants in FLNC are associated with several skeletal and cardiac myopathies. Pathogenic variants in FLNC were recently recognized to cause ACM, resulting, in part, from the identification of truncating variants in 28 unrelated cardiomyopathy patients referred to a gene testing laboratory in Spain.34 Familial evaluation led to the identification of 54 individuals with FLNC variants. SCD and arrhythmias treated by an ICD were frequent; in the 12 patients with SCD, the mean LVEF was 39.6% ± 12%.

Recommendation (COR IIa, LOE C-LD): In individuals with lamin A/C ACM and an indication for pacing, an ICD with pacing capabilities is reasonable.149,170,189

In some cohort studies,149,170,189 AV block was a univariate predictor of VT or VF, justifying consideration of an ICD when pacing is needed.
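As an illustration of how these genotype-specific criteria compose, the following sketch encodes the lamin A/C rule stated above (two or more of LVEF <45%, NSVT, and male sex) together with the PLN and FLNC thresholds. This is illustrative only: the function names and boolean inputs are assumptions, not part of the consensus statement, and none of this replaces shared clinical decision-making.

```python
# Hypothetical helper encoding the genotype-specific ICD criteria stated
# in this section. Illustrative only, not a clinical tool.

def lmna_icd_reasonable(lvef_below_45: bool, nsvt: bool, male: bool) -> bool:
    """Lamin A/C ACM: ICD reasonable (COR IIa) with >=2 of the 3 factors."""
    return sum([lvef_below_45, nsvt, male]) >= 2

def pln_icd_reasonable(lvef_below_45: bool, nsvt: bool) -> bool:
    """Phospholamban cardiomyopathy: ICD reasonable (COR IIa) with
    LVEF <45% or NSVT."""
    return lvef_below_45 or nsvt

def flnc_icd_reasonable(lvef_below_45: bool) -> bool:
    """FLNC ACM: ICD reasonable (COR IIa) with LVEF <45%."""
    return lvef_below_45

print(lmna_icd_reasonable(lvef_below_45=True, nsvt=True, male=False))  # True
print(pln_icd_reasonable(lvef_below_45=False, nsvt=True))              # True
```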
3.11. Management of ventricular arrhythmia and dysfunction

3.11.1. Medications including angiotensin-converting enzyme inhibitors, beta-blockers, and antiarrhythmic drugs

See Evidence Table: Medical Therapy for Ventricular Arrhythmia and Dysfunction. Recommendation flow diagrams are shown in Figure 13 and Figure 14.

The aims of medical therapy in ACM are to control ventricular dimensions and function, manage congestive symptoms, and prevent and treat arrhythmia. The management of HF in ACM involves two different aspects of myocardial dysfunction: LV failure and RV failure.

3.11.1.1. Medical therapies for left ventricular failure

ALVC that phenotypically overlaps with classic DCM predominantly affects the LV. In this case, the treatment of symptomatic and asymptomatic HF with reduced ejection fraction (HFrEF) follows the current 2013 (updated in 2016) AHA/ACC7,8 and ESC guidelines.9 Guideline-directed therapies include angiotensin-converting enzyme (ACE) inhibitors and angiotensin receptor blockers (ARBs), beta-blockers, aldosterone antagonists, and, in selected cases, vasodilators (hydralazine and isosorbide dinitrate).8,9,192 The 2016 recommendations of the AHA/ACC7 and ESC guidelines9 include new drugs: the angiotensin receptor–neprilysin inhibitor (valsartan/sacubitril193) and the sinoatrial modulator ivabradine.193,194 The therapy for congestive symptoms includes loop diuretics and volume control, with recommendations for a low-sodium diet.8,9 The benefit of digitalis for symptoms in patients with sinus rhythm has been debated; however, a recent retrospective analysis of the randomized Digitalis Investigation Group trial suggested that patients with LVEF <40% (HFrEF) and patients with LVEF 40%–50% (HF with mid-range ejection fraction, HFmrEF) had a benefit from digitalis therapy in terms of mortality and hospitalization (HFmrEF) or hospitalization only (HFrEF).195 Additionally, patients with reduced LVEF may benefit from cardiac resynchronization therapy,196 LV-assist devices, and heart transplantation.8,9 In a systematic review of 4 studies evaluating the use of digitalis for RV failure, which were limited to patients with cor pulmonale, there was no evidence of benefit in terms of improvement in RVEF, exercise capacity, or NYHA class.

3.11.1.2. Medical therapies for right ventricular failure

Recommendation (COR IIa, LOE C-EO): In individuals with ACM and symptomatic RV dysfunction, the use of ACE inhibitors or ARBs, as well as beta-blockers, aldosterone antagonists, and diuretics, is reasonable.

Recommendation (COR IIb, LOE C-EO): In symptomatic individuals with ACM and RV dysfunction, the use of isosorbide dinitrate to reduce preload may be considered.

Therapy to reverse ventricular remodeling in RV failure (typical of ARVC) is less established owing to the lack of trials specifically addressing patients with ARVC. In an ARVC model of plakoglobin knockdown in mice, preload-reducing treatment using a combination of diuretics and isosorbide dinitrate prevented the development of ARVC induced by endurance exercise training.197 These data suggest a potential benefit of preload-reducing therapy in early stages of RV remodeling.
3.11.1.3. Antithrombotic therapy in arrhythmogenic cardiomyopathy

Figure 13 Recommendations for ventricular dysfunction and antithrombotic medical therapy in individuals with arrhythmogenic cardiomyopathy (ACM). ACE = angiotensin-converting enzyme; ARB = angiotensin receptor blocker; COR = Class of Recommendation; LOE = Level of Evidence; LV = left ventricular; RV = right ventricular. Colors correspond to COR in Figure 1.

Recommendation (COR I, LOE B-NR): In individuals with ACM, in the presence of atrial fibrillation, intracavitary thrombosis, or venous/systemic thromboembolism, anticoagulant therapy is recommended.198

Patients with ARVC can develop "atrial disease" and a predisposition to atrial tachyarrhythmias. In the absence of atrial fibrillation, however, there is no clear evidence of a benefit from anticoagulation compared with placebo or aspirin in HF.199,200 Specifically in the ARVC population, a study of 126 patients found a relatively lower risk of thromboembolism in ARVC compared with LV HF; however, patients with severely dilated and hypokinetic RVs with slow blood flow and spontaneous echocardiographic contrast were at higher risk.198 Overall, anticoagulation is appropriate in the ACM population (ALVC and ARVC) to reduce stroke risk in patients with atrial fibrillation, in accordance with the current ACC/AHA and ESC guidelines for the management of atrial fibrillation,201,202 and in patients with intracavitary thrombosis or venous or systemic thromboembolism. In the absence of these factors, however, there is no evidence of a benefit from anticoagulation compared with placebo or aspirin.

Recommendation (COR IIb, LOE C-EO): Antithrombotic therapy may be reasonable in individuals with LV or RV aneurysm.

ACM may carry an increased risk of thromboembolic events. In ALVC the risk is increased by intraventricular thrombus formation in severe LV dysfunction, and in ARVC by not only RV dysfunction but also localized aneurysms and sacculations. Furthermore, some patients with ARVC can develop "atrial disease" and a predisposition to atrial tachyarrhythmias. There are no data to support antithrombotic therapy in isolated RV dysfunction.

3.11.1.4. Arrhythmia management

Recommendation (COR I, LOE C-LD): Beta-blocker therapy is recommended in individuals with ACM with inappropriate ICD interventions resulting from sinus tachycardia, supraventricular tachycardia, or atrial fibrillation/flutter with high ventricular rate.203

Beta-blocker therapy may prevent the occurrence of supraventricular arrhythmias within the programmed VT detection zone.
Inappropriate ICD shocks, which are typically due to supraventricular arrhythmias, are to be avoided, and studies of HF patients have demonstrated improved outcomes when the number of inappropriate shocks is reduced.204–206 There are no randomized studies of specific beta-blockers in ACM. In the general HF population, a nonrandomized post hoc substudy of the Multicenter Automatic Defibrillator Implantation with Cardiac Resynchronization Therapy (MADIT-CRT) trial showed the effectiveness of beta-blockers (carvedilol in particular) in reducing the number of inappropriate ICD therapies in patients who received an ICD with or without biventricular pacing.203

Recommendation (COR IIa, LOE C-EO): Beta-blocker therapy is reasonable in individuals with ACM who do not have an ICD.

In patients clinically affected by ACM, beta-blockers can prevent adrenergic arrhythmias, exercise-induced arrhythmias, and ventricular remodeling, although there are no controlled clinical trials to unequivocally demonstrate the drugs' benefit. In a cohort of well-characterized individuals with ARVC, beta-blockers were not significantly effective.183 In unaffected carriers (genotype-positive, phenotype-negative), the current lack of information does not support long-term beta-blocker therapy.

Recommendation (COR IIb): Amiodarone (LOE B-NR) and sotalol (LOE C-LD) may be reasonable in individuals with ACM for control of arrhythmic symptoms or to reduce ICD shocks.183,207,208

In patients with ventricular arrhythmias, antiarrhythmic therapy can be used to control symptoms. In a study of 95 patients with ARVC, the most effective drug appeared to be amiodarone,207 whereas there was no significant evidence of efficacy for sotalol and beta-blockers. In a more recent series of 301 patients with ARVC, however, neither beta-blockers, amiodarone, nor sotalol reduced life-threatening arrhythmic events.183 Given that patients with ARVC are predominantly younger than conventional HF patients, sotalol therapy before amiodarone in the earlier phases of the disease can be justified to avoid long-term amiodarone use and prevent its adverse extracardiac effects, although there are no robust data to support this approach in the ARVC patient population. The Optimal Pharmacological Therapy in Implantable Cardioverter Defibrillator Patients (OPTIC) trial randomized 412 patients with an ICD (but not specifically ACM) and inducible or spontaneous VT or VF to treatment with amiodarone plus a beta-blocker, sotalol alone, or a beta-blocker alone.208 Sotalol showed a trend toward reducing all-cause ICD shocks at 1 year, from 38.5% to 24.3% (HR: 0.61; P = .055). Patients treated with sotalol should have a normal or near-normal QT interval at baseline and normal or near-normal renal function. Compared with beta-blocker therapy alone, amiodarone reduced the number of ICD shocks (HR: 0.27; P < .001),208 but this came at the cost of more adverse effects.43

Recommendation (COR IIb, LOE C-LD): Flecainide in combination with beta-blockers, and in the absence of other antiarrhythmic drugs, may be reasonable in individuals with ACM, an ICD, and preserved LV and RV function for control of ventricular arrhythmias that are refractory to other therapies.209
In a small series of patients, the addition of flecainide to sotalol or metoprolol was found to be effective for controlling ventricular arrhythmias in patients with an ICD and ARVC refractory to single-agent therapy and/or catheter ablation.209 Data from patients with CPVT, including a recent randomized trial,210 also suggest the efficacy of flecainide in these patients, which could be extrapolated to the ARVC population.211 Overall, these findings suggest a potential benefit of flecainide in combination with beta-blockers for patients with refractory ventricular arrhythmias.

Figure 14 Medical therapy recommendations for arrhythmias. ACM = arrhythmogenic cardiomyopathy; COR = Class of Recommendation; ICD = implantable cardioverter defibrillator; LOE = Level of Evidence; LV = left ventricular; RV = right ventricular. Colors correspond to COR in Figure 1.

3.11.2. Role of catheter ablation

See Evidence Table: Catheter Ablation. A recommendation flow diagram is shown in Figure 15.

Recommendation (COR IIa, LOE B-NR): In individuals with ACM and recurrent sustained monomorphic VT who have failed or are intolerant of amiodarone, catheter ablation is reasonable for reducing recurrent VT and ICD shocks.212–222

Catheter ablation is a well-studied therapy for almost all forms of cardiomyopathy, especially for patients with ischemic scars and those with idiopathic dilated cardiomyopathies.223–225 Catheter ablation is recognized as a central treatment option for patients with ventricular arrhythmias who have received therapies from their ICDs, and in the context of failure or intolerance of antiarrhythmic drugs.213,214 For ARVC, evidence from single-center and multicenter cohorts has demonstrated the effectiveness of ablation in reducing the incidence of recurrent VT events and ICD shocks.218–222 Current technology and techniques suggest that electroanatomical mapping, which is routinely employed at all major centers to more accurately define scars and diseased tissue, supports better outcomes. Ablations for the more unusual cardiomyopathies are performed at high-volume referral centers, which are more accustomed to the idiosyncrasies of each cardiomyopathy subtype and provide highly trained operating room staff, anesthesiologists, and surgical backup.
3.11.2. Role of catheter ablation
See Evidence Table: Catheter Ablation. A recommendation flow diagram is shown in Figure 15.

Recommendation (COR IIa, LOE B-NR; references 212–222): In individuals with ACM and recurrent sustained monomorphic VT who have failed or are intolerant of amiodarone, catheter ablation is reasonable for reducing recurrent VT and ICD shocks.

Catheter ablation is a well-studied therapy for almost all forms of cardiomyopathy, especially for patients with ischemic scars and those with idiopathic dilated cardiomyopathies.223–225 Catheter ablation is recognized as a central treatment option for patients with ventricular arrhythmias who have received therapies from their ICDs, and in the context of failure or intolerance of antiarrhythmic drugs.213,214 For ARVC, evidence from single-center and multicenter cohorts has demonstrated the effectiveness of ablation in reducing the incidence of recurrent VT events and ICD shocks.218–222 Current technology and techniques suggest that electroanatomical mapping supports better outcomes; these methods are routinely employed at all major centers to more accurately define scars and disease. Ablations for the more unusual cardiomyopathies are performed at high-volume referral centers, which are more accustomed to the idiosyncrasies of each cardiomyopathy subtype. These centers provide highly trained operating room staff, anesthesiologists, and surgical backup.

Catheter ablation for patients with ARVC should not be considered curative for the underlying arrhythmogenic substrate; it is ultimately aimed at improving quality of life by limiting symptomatic ectopy, sustained arrhythmia, and especially ICD therapies. There are insufficient data showing that disease progression is affected, sudden death is prevented, or mortality is reduced.

Although outcomes are dictated more by the underlying arrhythmogenic substrate and the disease process, there are nevertheless similarities in the pathophysiology and strategies for catheter ablation across all forms of structural heart disease. Compared with patients with healthy hearts, patients with structural heart disease (including those with ischemic heart disease, DCMs, and all forms of ACM) all retain a diseased ventricular myocardium and various degrees of fibrosis or scar. These are fundamental substrates for reentrant ventricular arrhythmia and can therefore be targeted if the patient presents with monomorphic VT.226–229 Ablation for all forms of structural heart disease is aimed at removing or ameliorating this arrhythmogenic element, and extrapolation is therefore employed in this section, given the limited data for the rare or less-defined cardiomyopathies. Catheter ablation of VT associated with LMNA cardiomyopathy is associated with poor outcomes, including a high rate of arrhythmia recurrence, progression to end-stage HF, and high mortality.230 There are only isolated case reports of catheter ablation of VT in patients with LVNC,231,232 cardiac amyloidosis,233,234 and Fabry disease235; the bulk of the data concern procedural approaches and outcomes for patients with arrhythmogenic RV dysplasia or cardiomyopathy.216,218–222,236,237

Recommendation (COR IIa, LOE B-NR; references 216, 218–222): In individuals with ACM and recurrent symptomatic sustained VT in whom antiarrhythmic medications are ineffective or not tolerated, catheter ablation with availability of a combined endocardial/epicardial approach is reasonable.
Recommendation (COR IIa, LOE C-EO): In symptomatic individuals with ACM and a high burden of ventricular ectopy or nonsustained VT in whom beta-blockers and/or antiarrhythmic medications are ineffective or not tolerated, catheter ablation with availability of a combined endocardial/epicardial approach is reasonable.

Unlike many ischemic cardiomyopathies in which the diseased substrate is easily accessed transvenously, arrhythmogenic RV dysplasia and cardiomyopathy frequently require an epicardial approach, which is directly related to the location of the diseased tissue.216,218–222,236,237 This particular approach has been relatively well studied in terms of outcomes and technique. Freedom from ventricular arrhythmias and ICD therapies is definitively improved with combined endocardial and epicardial ablation. Interrupting the diseased substrate and targeting the clinical VT have provided higher long-term success rates of approximately 60%–80%. Therefore, a combined endocardial and epicardial approach is helpful when targeting symptomatic ventricular arrhythmias. These recommendations do not address the separate question of how to approach a patient who has already failed an endocardial approach, or whether an epicardial approach should be employed as the first line.

Recommendation (COR IIb, LOE C-LD; references 216, 218, 220): Individuals with ACM and recurrent symptomatic sustained VT in whom medical therapy has not failed may be considered for catheter ablation.

In some catheter ablation studies of patients with ACM, antiarrhythmic drug therapy was not mandated for inclusion. However, the number of such patients included in these studies was limited. This recommendation addresses patients with recurrent symptomatic sustained VT who desire ablation either as first-line treatment or to reduce or avoid medical therapy that has been effective.

[Figure 15: Catheter ablation recommendations for individuals with arrhythmogenic cardiomyopathy (ACM), presented as a flow diagram (branch points: sustained monomorphic VT? amiodarone ineffective or not tolerated? nonsustained VT or high burden of ventricular ectopy? antiarrhythmic medications ineffective or not tolerated? beta-blockers and/or antiarrhythmic medications ineffective or not tolerated?), with each path leading to a decision for catheter ablation under the corresponding recommendation above. The figure also notes that the COR IIb recommendation does not exclude the choice to continue medical therapy that has not failed and not proceed with ablation. COR = Class of Recommendation; ICD = implantable cardioverter defibrillator; LOE = Level of Evidence; VT = ventricular tachycardia. Colors correspond to COR in Figure 1.]

3.12. Prevention of disease progression
See Evidence Table: Exercise Restriction. A recommendation flow diagram is shown in Figure 16.

Clinicians have long recognized that patients with ARVC are disproportionately athletes238 and that athletic patients with ARVC have a high risk of SCD.239 A seminal review of autopsies in Italy showed that participation in competitive athletics resulted in a more than 5-fold increase in SCD risk among adolescents and young adults with ARVC240 and that implementing a preparticipation screening program resulted in a sharp decline in deaths.241 The discovery that pathogenic variants in genes encoding the cardiac desmosome were present in up to 60% of patients with ARVC provided insight into the connection between exercise and ARVC.145 Murine ARVC models with abnormal expression of desmosomal proteins have consistently shown exercise-induced or exercise-exacerbated cardiovascular phenotypes.242–246 Defining the molecular mechanisms of this process is an active area of research.

These discoveries also prompted research to more precisely define the role of exercise in penetrance, arrhythmic risk, and structural progression in patients with ARVC and their at-risk relatives. These studies (reviewed below) make a compelling case that (1) there is a dose-dependent relationship between exercise exposure and ARVC onset (penetrance) and severity; and (2) frequent high-intensity or competitive exercise in patients with established ARVC is associated with worse clinical outcomes.

3.12.1. Clinical exercise questions to direct a literature search
In this section, we used the PICO format to construct questions to direct a literature search. The following questions were analyzed:
1. Should a family member who is mutation-positive but phenotype-negative be restricted from strenuous exercise to prevent ARVC disease expression?
2. Should patients who meet Task Force Criteria for the diagnosis of ARVC, regardless of symptoms or disease severity, be restricted from strenuous exercise, compared to no restriction, to prevent VT or VF?
3. Should patients who meet Task Force Criteria for the diagnosis of ARVC, regardless of symptoms or disease severity, be restricted from strenuous exercise, compared to no restriction, to prevent progression of RV or LV dysfunction?

3.12.2. Exercise definitions
To best translate the results of these studies to clinical practice, it is important to consider how each study collects exercise history and defines an individual as an athlete. Physical activity has 4 broad dimensions: (1) mode or type of activity, (2) frequency, (3) duration, and (4) intensity.247 Activity can be considered recreational or competitive and categorized based on peak static and dynamic demand. Here, we define "endurance" exercise as that with a moderate or high dynamic demand as per the AHA/ACC Scientific Statement for Eligibility and Disqualification Recommendations for Competitive Athletes with Cardiovascular Abnormalities248 (class C and B activities). Similarly, we define "competitive exercise" as "participation in an organized team or individual sport that requires regular competition against others as a central component, places a high premium on excellence and achievement and requires some form of systematic (and usually intense) training," consistent with these guidelines.249 Intensity, duration, and frequency of aerobic physical activity can be integrated into one measure (metabolic equivalent [MET]-minutes/week) for an exercise "dose." For instance, the AHA minimum recommended exercise for healthy adults is 450–750 MET-minutes weekly.250 A MET is the ratio of the work metabolic rate to the resting metabolic rate. Vigorous-intensity activities are generally considered those requiring ≥6 METs.251 The 2011 Adult Compendium of Physical Activities provides a comprehensive listing of the METs associated with a variety of physical activities (compendiumofphysicalactivities/).252 Figure 17 includes examples of METs associated with common types of endurance exercise.
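Because the exercise "dose" defined above is simply intensity times duration times frequency, it can be computed directly from an activity log. A minimal sketch, assuming a hypothetical weekly log; the MET values are taken from Figure 17, and the 450–750 MET-minutes/week band is the AHA minimum for healthy adults cited above, not a threshold specific to ACM:

```python
# Exercise "dose" in MET-minutes/week, per the definition in Section 3.12.2.
WEEKLY_LOG = [
    # (activity, METs, minutes per session, sessions per week)
    ("walking for exercise, 4 mph", 5.0, 30, 5),
    ("swimming laps, light-moderate", 5.8, 45, 1),
]

def met_minutes_per_week(log) -> float:
    return sum(mets * minutes * sessions for _, mets, minutes, sessions in log)

dose = met_minutes_per_week(WEEKLY_LOG)
print(f"Dose: {dose:.0f} MET-minutes/week")            # 1011 for this log
print(f"At or above the AHA 450 MET-min/week minimum: {dose >= 450}")

# Some studies cited below express dose as MET-hours/year instead;
# the unit conversion is straightforward (52 weeks/year, 60 min/hour).
print(f"Equivalent: {dose * 52 / 60:.0f} MET-hours/year")
```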
[Figure 16: Exercise recommendations for individuals with arrhythmogenic right ventricular cardiomyopathy (ARVC), presented as a flow diagram (branch points: meets diagnostic criteria for ARVC? genotype-positive family member?) restating the COR I counseling and COR III: Harm exercise recommendations given in the text. The figure defines competitive exercise as including regular competition and systematic intense training, and endurance exercise as class B (moderate) sports such as downhill skiing, figure skating, running (sprint), and volleyball, and class C (high) sports such as long-distance running, cross-country skiing, rowing, and basketball. COR = Class of Recommendation; LOE = Level of Evidence. Colors correspond to COR in Figure 1.]

3.12.3. Exercise increases age-related penetrance among genotype-positive relatives
Evidence from several retrospective studies suggests there is a dose-dependent relationship between endurance exercise and the likelihood of developing ARVC. A study of 87 carriers of heterozygous desmosomal variants showed that participation in vigorous endurance athletics and a longer duration of annual exercise were associated with an increased likelihood of ARVC diagnosis and of developing sustained ventricular arrhythmias.253 Endurance athletes were defined as participants in a sport with a high dynamic demand248 for at least 50 hours per year at vigorous intensity. A separate analysis using the same definitions171 further showed that patients with ARVC with adolescent onset were significantly more likely to have been endurance athletes during their youth than were patients with ARVC diagnosed as adults. Finally, a third study confirmed that in 10 families with a segregating PKP2 variant, family members who developed ARVC were more likely to be athletes and to have engaged in a significantly higher exercise dose across their lifespan than family members without disease.254 Consistent with this, Saberniak et al255 showed that athletes (≥1440 MET-minutes/week for a minimum of 6 years) were more likely to be diagnosed with ARVC, and that the age of starting athletic training was correlated with the age of ICD implantation, suggesting a temporal relationship between the timing of exercise exposure and disease onset. This study also illustrated a linear relationship between the amount of physical activity and the extent of RV and LV dysfunction in patients and at-risk family members. Among asymptomatic family members, athletes had worse LV function and more RV abnormalities. It is important to recognize that most of these data are from carriers of PKP2 variants, and the association between exercise and penetrance in carriers of other desmosomal and nondesmosomal ARVC-related variants awaits confirmation. Nonetheless, taken together, these studies establish that the likelihood that genotype-positive relatives of patients with ARVC will develop disease is strongly associated with frequent endurance exercise. Thus, presymptomatic genetic testing not only facilitates early diagnosis but also provides the opportunity to decrease the risk of developing ARVC through lifestyle changes. Clinicians should counsel these patients that competitive or frequent high-intensity endurance exercise is associated with an increased likelihood of developing ARVC.

3.12.3.1. Exercise for carriers of pathogenic variants detected incidentally
It is important to recognize that these data are from genotype-positive patients who are also relatives of patients with ARVC.
ARVC-associated pathogenic variants are increasingly identified through population-based sequencing studies and direct-to-consumer genetic testing.11 The desmosomal genes are also included in the list of 59 genes recommended by the ACMG for return when discovered as secondary findings.162 Research suggests that the penetrance of variants detected in this setting is lower than for family members identified through cascade testing.163 The benefit of limiting frequent high-intensity or competitive endurance exercise for these patients may thus be lower but requires further study.

Figure 17: Metabolic equivalents (METs) associated with common types of endurance exercise.252
16: Competitive cycling
15: Cross-country ski racing (>8.0 mph)
12: Canoeing, rowing, crew in competition
10: Soccer, competitive
9.8: Running, 6 mph (10 minutes/mile)
8: Basketball game
7: Racquetball
5.8: Swimming laps, freestyle, light-moderate effort
5.3: Downhill skiing, moderate effort
5: Walking for exercise, 4 mph (very brisk pace, level, firm surface)
4.8: Golf
3.5: Walking for pleasure or transportation
3.3: Sailing (boat and board sailing, windsurfing, ice sailing)
3: Canoeing/rowing for pleasure
2.5: Yoga
The figure depicts an inverse association between intensity of exercise (METs) and recommended frequency of participation among patients with arrhythmogenic right ventricular cardiomyopathy (ARVC). Aiding patients and at-risk family members in making choices about participation in various types of exercise involves ongoing discussion and shared decision making. Based on data suggesting that higher exercise intensity and doses (intensity × duration) are associated with poorer outcomes,254,255,258,259 vigorous-intensity activities (shown in red/orange in the figure) should be performed rarely if at all, and lower-intensity activities (green) more regularly. This figure is provided to aid the clinician in understanding the METs associated with a variety of common activities252 and to aid in discussions with patients and families.

3.12.3.2. Exercise and relatives of "gene-elusive" patients with arrhythmogenic right ventricular cardiomyopathy
Evidence is emerging that there is a cohort of athletic patients with ARVC without pathogenic variants who may have a largely exercise-induced form of disease. These patients are characterized by very high levels of athletic activity, no identifiable pathogenic desmosomal variant, and an absence of family history.256,257 Unaffected family members of such patients with a normal initial evaluation may have a considerably lower likelihood of developing ARVC. These patients should undergo cardiac evaluation every 1–3 years as described in Section 3.9 Cascade Family Screening. At present, however, there is no strong evidence to recommend limiting exercise.
3.12.3.3. Exercise increases arrhythmic risk and structural dysfunction in patients with arrhythmogenic right ventricular cardiomyopathy
In contrast to the still-limited data available to inform recommendations for patients with a positive genetic test for ARVC but who are phenotype-negative (genotype-positive, phenotype-negative), a growing group of studies has consistently shown that competitive or frequent high-intensity endurance exercise is associated with a higher risk of ventricular arrhythmias regardless of genotype.183,253,255,256,258,259 Although the definitions of athletic activity vary across these studies, the outcomes are the same, with participation in high-intensity, strenuous, competitive, high-duration exercise associated with poorer survival free from sustained ventricular arrhythmias. This result is not surprising, given that data from autopsy studies have shown that ARVC-related SCD often occurs with vigorous exercise.260,261 Recently, Lie et al259 further established that while high-intensity and long-duration exercise were both associated with ventricular arrhythmias, intensity remained an independent predictor after adjusting for duration, highlighting the importance of limiting high-intensity exercise.

Several studies have suggested that the risk of arrhythmias during follow-up can be modified by reducing exercise. Desmosomal variant carriers who reduced their exercise after the clinical presentation had a lower incidence of ventricular arrhythmia compared with patients who continued to participate in intense and/or long-duration exercise.253 This finding was replicated in a study of 108 probands from the North American ARVC Registry, which showed that patients who continued self-defined competitive exercise had a significantly worse arrhythmic course.258 In contrast, there were no significant differences in the risk of ventricular arrhythmias or death between the inactive patients and the recreational athletes, although recreational athletes had worse LV function. Finally, Wang et al262 showed that, among 129 patients with ARVC with ICDs, patients who reduced their exercise dose (MET-hours/year) the most had the best survival free from ICD therapy in follow-up. These data suggest that gene-elusive patients and those who have had an ICD implanted for primary prevention may benefit the most from reducing their exercise dose.

The extent of both RV and LV structural dysfunction is also correlated with exercise history for patients with ARVC. This finding was first observed by Sen-Chowdhry et al,80 who found that, of 116 patients with ARVC, the 11 patients who participated in long-term endurance training had more severe RV dysfunction. Sawant et al showed that among nondesmosomal "gene-elusive" patients with ARVC, those who had performed a higher average MET-hours/year of exercise were most likely to have major RV structural abnormalities.256 Saberniak et al performed an extensive analysis and demonstrated that RV and LV function was significantly reduced in athletes and that exercise was correlated with the extent of structural dysfunction in a dose-dependent fashion.255 Although no study has prospectively assessed the effect of exercise reduction on structural progression, athletic activity is associated with poor clinical outcomes: Saberniak et al255 showed that only athletes progressed to transplantation, while James et al253 showed that only athletes developed class C HF.
3.12.4. Exercise and other arrhythmogenic cardiomyopathies
In contrast to ARVC, there are limited genotype-specific data from which to make exercise recommendations for other ACMs. Similar to desmosomal and "gene-elusive" ARVC patients, ventricular arrhythmias occur disproportionately during exercise in patients with the R14del PLN variant.161 Preliminary studies suggest, however, that a history of athletics is not associated with disease penetrance in these patients.

Recommendation (COR I, LOE B-NR; references 171, 253–255): Clinicians should counsel adolescent and adult individuals with a positive genetic test for ARVC but who are phenotype-negative that competitive or frequent high-intensity endurance exercise is associated with an increased likelihood of developing ARVC and ventricular arrhythmias.

Competitive or frequent high-intensity endurance exercise increases the risk of developing RV and LV dysfunction. Athletic activity prior to and after disease presentation also increases the risk of ventricular arrhythmias and is associated with poorer survival free from sustained ventricular arrhythmias.171,253–255 A positive genetic test indicates a pathogenic or likely pathogenic variant in an ARVC-associated gene per the ACMG guidelines for variant adjudication.95 Competitive exercise includes participation in "an organized team or individual sport that requires regular competition against others as a central component, places a high premium on excellence and achievement, and requires some form of systematic (and usually intense) training," as defined by the AHA/ACC Scientific Statement for Eligibility and Disqualification Recommendations for Competitive Athletes with Cardiovascular Abnormalities.249 Endurance exercise includes class C and B sports in these guidelines.248 Data on the effect of static exercise (class A) on outcomes are largely absent from the literature. Intensity is typically measured in METs.252

Recommendation (COR III: Harm, LOE B-NR; references 80, 171, 183, 253–255, 258): Individuals with ARVC should not participate in competitive or frequent high-intensity endurance exercise, as this is associated with increased risk of ventricular arrhythmias and promotion of structural disease progression.

Competitive or frequent high-intensity endurance exercise is related to the extent of RV and LV dysfunction in patients with ARVC. Additionally, such exercise is associated with poorer outcomes for ventricular arrhythmias, whereas reducing exercise confers a more favorable arrhythmic prognosis. Aiding patients and at-risk family members in making choices about participation in various types of exercise involves ongoing discussion and shared decision making.

For the basic science details of the mechanisms responsible for the forms of ACM, please see Section 4 Disease Mechanisms.

Section 4 Disease mechanisms
An overview of some of the disease mechanisms for ACM is shown in Figure 18.

4.1. Desmosomal defects
The cardiac ID is a highly organized structure that connects adjacent cardiomyocytes and is classically composed of three main structures: (1) gap junctions (GJs), which metabolically and electrically connect the cytoplasm of adjacent cardiomyocytes; (2) adherens junctions (AJs), which connect the actin cytoskeleton of adjacent cells; and (3) desmosomes, which function as cell anchors and connect intermediate filaments (IFs). In addition, ion channels reside in the ID. Pathologic genetic variants in ID proteins have been associated with cardiac arrhythmias, such as BrS, ARVC, and other genetically determined ACMs.263,264 However, rather than being independent, all ID components work closely together by partnering with multifunctional proteins such as ZO-1, ankyrin-G, and β-catenin, allowing the ID to integrate mechanical and electrical functions. GJs form a plaque surrounded by the perinexus, in which free connexons reside; the connexome integrates sodium (NaV) channels, the desmosome, and GJs; and the area composita hosts AJs and desmosomes, also integrated as adhering junctions. Furthermore, the transitional junction connects sarcomeres to the plasma membrane.
The ID ensures rapid propagation of the electrical signal that initiates contraction throughout the heart and allows the cardiomyocytes to withstand the strong mechanical forces imposed by the beating of the heart. AJs, desmosomes, GJs, and ion channels form a functional unit as the area composita. Furthermore, GJs and ion channels likely create and propagate action potentials (APs) together. Some structural components of cell-cell junctions, such as Cx43 and β-catenin, can also interact with other ID proteins or function in signaling pathways. Protein deficiencies can ultimately lead not only to mechanical dysfunction (eg, AJ dysfunction) but also to arrhythmias, often via GJ remodeling, thereby illustrating the interdependency of ID components and the coupling of mechanical and electrical elements. The lateral membrane (LM) of cardiomyocytes has a different makeup compared with the ID, hosting, among other structures, costameres and focal adhesions and linking sarcomeres to the extracellular matrix. The ID and LM have several proteins in common, such as vinculin and α-actinin, as well as ion channels.

The AJ is the primary anchor for myofibrils and connects actin filaments from adjacent cells, which allows the cell to retain its shape under mechanical stress. Furthermore, the AJ transduces signals concerning the actin cytoskeleton and senses mechanical forces on the cell. The transmembrane protein N-cadherin is the main constituent of AJs and homodimerizes with N-cadherins from adjacent cells in the extracellular space, acting as an intercellular zipper. This action provides tissue specificity during development, allowing cells to interact only with cells expressing the same cadherin. Calcium ions ensure the rod shape of N-cadherin, whose intracellular domain primarily binds β-catenin. N-cadherin also possesses regulatory functions, including a role in mechanosensing. β-catenin directly interacts with the C-terminal cytoplasmic domain of N-cadherin. By associating with α-catenin and vinculin, β-catenin connects AJs to the actin cytoskeleton. β-catenin also plays a central role in cadherin-mediated signaling and can activate the canonical Wnt signaling pathway. When Wnt binds its Frizzled receptor, β-catenin translocates to the nucleus to initiate transcription of transcription factors of the T-cell factor/lymphoid enhancer-binding factor family. The canonical Wnt pathway is crucial in cardiac development but has also been proposed as the key mechanism in certain cardiomyopathies (ie, activation induces cardiac hypertrophy). Therefore, N-cadherin has been thought to sequester β-catenin to prevent Wnt activation. Activation of the Wnt pathway increases expression of the GJ protein Cx43, and the C-terminus of Cx43 can interact with β-catenin. When Wnt is not present, cytoplasmic β-catenin is targeted for degradation by the proteasome.
Although AJs also transduce forces to the cytoskeleton, desmosomes are more robust, thanks to their connection to mechanically resilient IFs. The intercellular part of the cardiac desmosome is built up by the cadherins desmoglein-2 (DSG2) and desmocollin-2 (DSC2), which bind in a heterologous way. The armadillo proteins junction plakoglobin (JUP) and plakophilin-2 (PKP2), together with desmoplakin (DSP, a member of the plakin superfamily), connect desmin to the desmosome. When DSC2 and DSG2 are bound, the hyperadhesive state of the desmosome depends on the presence of calcium ions.

Considering the major desmosomal proteins, PKP2 is associated with GJs and is required for the organization of the ID and for desmosomal function. Together with JUP, PKP2 mediates attachment to IFs. PKP2 knockdown causes a decrease in conduction velocity and an increased propensity to develop reentry arrhythmias, and PKP2 variants are the most common in hereditary ACM. Plakoglobin is present in both desmosomes and AJs. Desmoplakin connects the desmosomes to the type III IF protein desmin; its N-terminal and C-terminal domains and the α-helical domain in between are each almost 1000 amino acids long, and its interaction with PKP2 occurs at the N-terminal domain. DSG2 pathogenic gene variants are, like those in all other cardiac desmosomal proteins, associated with ACM.

The most prominent ACM desmosomal gene mutations involve PKP2 and DSP, with the desmosomal cadherins DSG2 and DSC2, and JUP, being less common.14,25,169,265 The majority of these genes primarily cause ARVC, although pathogenic variants in DSP cause a substantial proportion of ALVC. Other "nondesmosomal" genes, such as TGFB3 and TMEM43, disrupt the function of desmosomes.132,266,267 One of the more recently described causative genes is CDH2, encoding N-cadherin, another member of the cadherin superfamily of predominantly Ca2+-dependent cell surface adhesion proteins.118,119 In the report by Mayosi et al, the affected family members all presented with ventricular arrhythmias and demonstrated imaging features of ARVC. The study by Turkowski et al, however, described a family with an arrhythmogenic presentation in whom imaging showed a cardiomyopathy.119

[Figure 18: Disease mechanisms for arrhythmogenic cardiomyopathy. Cardiomyocyte showing the extracellular matrix, sarcolemma, sarcomere, nucleus, and key proteins that provide structure for ventricular function and cardiac rhythm. See the description of the functions of these proteins in Section 4.1 Desmosomal Defects.]
In desmosomes, the desmosomal cadherins (desmocollin and desmoglein) are mainly anchored to the IFs of the cytoskeleton through numerous intracellular protein partners, whereas in fascia AJs, the classical cadherin N-cadherin is primarily anchored to the actin microfilaments of the cytoskeleton and promotes cell-cell adhesion through extracellular associations of its cadherin repeat domains.268–270 Interestingly, the protein components of desmosomes and fascia AJs are not mutually exclusive.270,271 In fact, the mechanical junctions of the ID are an admixture of desmosomal and fascia adherens proteins that form a hybrid functional zone, the area composita.269,272,273 Therefore, even if ARVC has traditionally been considered a desmosomal disease, it is now reasonable to consider that the mechanistic basis of ARVC may extend beyond the strict functional zone of the desmosome to that of the area composita. Supporting this concept, pathologic variants in CTNNA3 (another gene in the area composita), which encodes αT-catenin, have also been identified in patients with ARVC who were negative for pathologic variants in the main desmosomal genes.115 α-catenins are natural partners of the cytoplasmic domain of classical cadherins, that is, N- and E-cadherins, and, in the case of N-cadherin, act as its go-between for anchoring to the actin cytoskeleton. The fact that cadherin-2, like its desmosomal cadherin counterparts, is a major player in the ID is also supported by the cardiac-specific Cdh2 mouse model, in which deletion of N-cadherin in the adult mouse heart causes dissolution of the ID structure, including loss of both desmosomes and AJs, demonstrating that desmosome integrity is also cadherin-2 dependent.274 These mice also exhibited modest, albeit atypical, DCM and spontaneous ventricular arrhythmias that resulted in SCD.274 This increased arrhythmic propensity (all mice experienced SCD approximately 2 months after deletion of N-cadherin from the heart) was probably due to a reduced and heterogeneously distributed connexin-43, causing loss of functional GJs and partial cardiomyocyte uncoupling and highlighting the prominent role of cadherin-2 in all types of functional junctions in the ID.275 Decreases in GJ number and size, with concomitantly increased arrhythmia susceptibility, have also been demonstrated in N-cadherin heterozygous null mice, with 30%–60% of these mice developing VT, suggesting that cadherin-2 haploinsufficiency might create an important arrhythmogenic substrate.276 ID remodeling with concomitant reduction of localization of desmosomal proteins, connexin-43, and cadherin-2 has also been demonstrated in ventricular tissues of transplanted hearts of patients with ARVC, further supporting the involvement of cadherin-2 in ARVC pathogenesis.277

4.2. Ion channel defects
Cardiac cells are excitable cells that can generate and propagate an AP, the electrical signal that induces cardiomyocyte contraction. The cardiac AP is generated by ions moving across the cell membrane; depolarization takes the cell from the resting state to an activated state, and repolarization returns it to the resting membrane potential.278 All phases of the cardiac AP occur via the synergistic activation and inactivation of several voltage-dependent ion channels. In contractile cardiomyocytes, APs are triggered by the acute entrance of sodium ions (Na+) into the cell, resulting in an inward current (INa) (SCN5A) that shifts the membrane potential from its resting state (−90 mV) to a depolarized state (+20 mV).
This phase is followed by the efflux of potassium (K+) ions through an outward current named Ito, which initiates cell repolarization. This in turn is followed by the plateau phase, a short period of constant membrane potential due to the balance between inward calcium (Ca2+) currents (ICaL) through the voltage-dependent L-type calcium channels (LTCCs) and time-dependent delayed-rectifier outward K+ currents (mainly the slow delayed-rectifier IKs [KCNQ1] and the rapid delayed-rectifier IKr [KCNH2]). At this point, the Ca2+ entry through the LTCC triggers a much larger release of Ca2+ from sarcoplasmic reticulum (SR) stores through the ryanodine receptor channel type 2, producing the systolic increase in intracellular Ca2+ needed for cell contraction. Upon LTCC inactivation, the net outward K+ currents repolarize the cell and bring the membrane potential back to its resting state. The balance between Ca2+ and K+ currents therefore determines the AP duration. The basal and acetylcholine-dependent inwardly rectifying K+ currents (IK1 and IKACh) control the final repolarization and determine the resting membrane potential. Ca2+ is then extruded from the cell through the Na+/Ca2+ exchanger (NCX) type 1 and taken back into the SR through the SR Ca2+-ATPase type 2a, thereby restoring low intracellular Ca2+ levels and allowing cell relaxation during diastole.

Pacemaker cells are distinct from other cell types in showing automaticity, a property resulting from both voltage-dependent and calcium-dependent mechanisms.279 The former involves the funny current (If) carried by hyperpolarization-activated cyclic nucleotide-gated channels,280 which have several unusual characteristics, such as activation on hyperpolarization, permeability to sodium and potassium ions, modulation by intracellular cyclic adenosine monophosphate, and a small single-channel conductance. The latter involves spontaneous calcium release from the SR,281 which activates INCX. The crucial role of calcium-dependent mechanisms has been demonstrated in mice with complete atrial-specific knockout of NCX, which show no pacemaker activity.282 Both mechanisms result in the spontaneous depolarization responsible for the rising slope of the membrane potential. When the proper ion current flows are disturbed, electrical abnormalities in the form of arrhythmias occur.
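The statement that AP duration is set by the balance between depolarizing and repolarizing currents can be caricatured with a generic excitable-cell model. The sketch below integrates a FitzHugh-Nagumo cell with forward Euler; this is a two-variable cartoon, not a physiological cardiac model, all parameters are illustrative, and the recovery rate `eps` stands in only loosely for the strength of the repolarizing currents. Faster recovery shortens the time the cell spends depolarized, mirroring the Ca2+/K+ balance described above.

```python
def spike_duration(eps: float, dt: float = 0.01, t_end: float = 100.0) -> float:
    """Time the voltage-like variable v stays above 0 after a brief stimulus,
    in a FitzHugh-Nagumo cell (classic parameters a=0.7, b=0.8)."""
    v, w = -1.2, -0.6                      # near the resting state
    time_above_zero = 0.0
    for step in range(int(t_end / dt)):
        t = step * dt
        i_stim = 0.8 if t < 1.0 else 0.0   # brief depolarizing pulse
        dv = v - v**3 / 3 - w + i_stim     # fast, regenerative "depolarizing" dynamics
        dw = eps * (v + 0.7 - 0.8 * w)     # slow recovery, the "repolarizing" variable
        v += dt * dv
        w += dt * dw
        if v > 0:
            time_above_zero += dt
    return time_above_zero

# Stronger/faster recovery (larger eps) shortens the depolarized phase,
# analogous to a shorter AP when repolarizing K+ currents dominate.
for eps in (0.05, 0.08, 0.2):
    print(f"recovery rate {eps}: depolarized phase lasts {spike_duration(eps):.1f} time units")
```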
4.2.1. SCN5A
The SCN5A gene, which encodes the alpha subunit of the voltage-gated sodium channel Nav1.5, is responsible for the inward sodium current (INa).283 This current is the main component of rapid depolarization in cardiomyocytes and is responsible for the AP upstroke, which subsequently initiates the multistep excitation-contraction coupling cascade.284 This periodic depolarization underlies the synchronous and rhythmic contraction of the heart chambers.284

Pathologic variants in genes encoding ion channel proteins are well-known causes of inherited arrhythmia disorders, such as LQTS, short QT syndrome (SQTS), BrS, CPVT, and AV block, to name a few. The involved genes include those encoding the cardiac sodium channel, potassium channels, and calcium channels. In these disorders, pathologic variants in the affected gene disturb the function of the encoded ion channel protein, leading to abnormalities of the AP. In several of these genes, pathologic variants can cause a heterogeneous array of clinical features, at times differing even within the same family. For instance, pathologic variants in the cardiac sodium channel gene SCN5A are responsible for LQTS type 3 (LQT3), which develops due to a gain in channel function. On the other hand, pathologic SCN5A variants also cause BrS, an electrocardiographically distinguishable disorder compared with LQT3, which occurs due to a loss of sodium channel function.285 In addition to causing ventricular tachyarrhythmias, atrial fibrillation, atrial standstill, and AV block,286 SCN5A is known to cause an arrhythmogenic form of DCM and an arrhythmogenic form of LVNC.287

The 2006 Scientific Statement on "Contemporary definitions and classification of the cardiomyopathies," endorsed by the AHA, placed "ion channel disorders" under the classification of primary genetic cardiomyopathies.288 This decision was largely based on data regarding the role of pathologic variants in genes encoding defective ion channel proteins governing cell membrane transit of sodium, potassium, and calcium ions, leading to ion channel-related arrhythmia disorders, including LQTS, SQTS, BrS, and CPVT, and on the role of these disorders in the development of cardiomyopathies.288 This classification scheme has continued to be evaluated, and the list of overlapping cardiomyopathy-arrhythmia phenotypes has grown over time, with primary and secondary causes of ion channel dysfunction seen in many cardiomyopathies. Pathogenic variation in the SCN5A gene resulting in electrical and structural cardiac remodeling (ie, arrhythmogenic DCM) was first described in 2003 by Groenewegen et al in a large family with atrial standstill, a rare form of atrial cardiomyopathy.289 At the same time, it was shown that the clinical spectrum of rare SCN5A pathologic genetic variants could be expanded to ARVC and DCM, accompanied by arrhythmias and conduction disorders.131,290 In 2008, new evidence showed that pathologic variants in the SCN5A gene might represent a risk factor for rhythm disturbances in LVNC.291 These disorders are inherited as autosomal dominant traits. The frequency of SCN5A-mediated cases in patients with ACMs is approximately 2%;292 however, when pathologic variants in the SCN5A and LMNA genes are taken together, they account for up to 5%–10% when considering only patients with DCM with progressive cardiac conduction defects and supraventricular and/or ventricular arrhythmias. Both RV and LV dilation and dysfunction can occur, as can broad and heterogeneous electrical abnormalities, including atrial standstill, progressive AV block, atrial fibrillation, sick sinus syndrome, VT, torsades de pointes, and VF, resulting in arrhythmic sudden death in some cases.

SCN5A may also play a role in ACM in the absence of pathogenic variants in the gene itself. In ARVC, it is clear that when pathologic variants in genes encoding the cardiac desmosome are identified, Nav1.5, which has been shown to coprecipitate with the desmosomal protein PKP2, can be disrupted and dysfunctional.
The loss of PKP2 expression has been shown to alter the amplitude and kinetics of the sodium current (INa).293 In addition, pathologic variants in PKP2 have been associated with a sodium channelopathy phenotype, and decreased immunoreactive Nav1.5 protein has been detected in the majority of human ARVC heart samples.294 These observations indicate a close functional association between Nav1.5 and mechanical junction proteins, which is further supported by the finding that Nav1.5 coprecipitates with the AJ protein N-cadherin295 and by the demonstration of "adhesion/excitability" nodes formed by aggregates of Nav1.5 and N-cadherin.296 Leo-Macias et al described the presence of these adhesion/excitability nodes in cardiac myocytes and demonstrated that (1) the AJ protein N-cadherin serves as an attractor for Nav1.5 clusters, (2) the Nav1.5 channels in these clusters are major determinants of the cardiac sodium current, and (3) clustering of Nav1.5 facilitates its regulation by molecular partners.296 Te Riele et al further demonstrated that Nav1.5 is in a functional complex with cell adhesion molecules and that a primary Nav1.5 defect can affect N-cadherin biology, resulting in reduced size and density of N-cadherin clusters at the ID.295 The finding that Nav1.5 coprecipitates with the AJ protein N-cadherin demonstrates the link to the junction/ID/desmosome and supports the hypothesis that sodium channel dysfunction can occur via disruption of mutated binding partners (ie, supporting the PKP2 and arrhythmia scenario). Therapy for this disorder has not been well studied and is not standardized. Pacemakers and ICDs have been used for some individuals, with varying outcomes. Pharmacological therapies have been disappointing, and no specific pharmacotherapy has thus far been recommended for these patients.

4.3. Cytoskeletal defects
The cytoskeleton is the cell's basic scaffold, in which other subcellular components are spatially arranged so as to communicate efficiently between the cell's internal and external environments. In striated muscle cells, the cytoskeleton consists of myofibrillar and extramyofibrillar portions. The myofibrillar cytoskeleton is composed of thin and thick myofilaments and titin filaments, providing the foundation for myocyte contraction and relaxation. The extramyofibrillar cytoskeleton consists of microfilaments, microtubules, and IFs.

IFs serve as a scaffold connecting the sarcomere to other organelles (such as mitochondria or the nucleus) to maintain cellular integrity and contribute to mechanotransduction. The sarcomere is tethered to the sarcolemma (the muscle cell membrane) by another cytoskeletal assembly, the costamere. Costameres link the sarcomere to the sarcolemma via the Z-disc and M-band. Individual heart cells are connected by IDs, which synchronize muscle contraction. The myofibrils are linked to the plasma membrane at the Z-discs via the costameres. There are specific membrane invaginations (T-tubules) at the Z-disc, which associate with the flanking SR to form the dyad. At the ID, desmosomes and AJs link neighboring cardiomyocytes mechanically, and GJs provide ion channels for intercellular communication. Desmosomes link to the IF cytoskeleton (composed of desmin), whereas AJs anchor actin filaments (the myofibrils). The border of the last sarcomere before the plasma membrane is defined as the transitional junction.
The cytoskeletal structure is continually remodeled to accommodate normal cell growth and respond to pathophysiological cues. The cytoskeleton maintains the structural integrity and morphology of cardiomyocytes. Cytoskeleton components are also involved in a variety of cellular processes, such as cell growth and division, cell movement, vesicle transport, cellular organelle location and function, localization and distribution of membrane receptors, and cell-cell communications. The cytoskeleton in cardiac myocytes is also believed to play an important role in the transduction of mechanical signals, based upon the unique distribution of the extensive cytoskeletal network as well as the juxtaposition of ion channels, signaling transducers, and network messengers. Cytoskeletal modifications and cardiac myocyte remodeling are causally linked to cardiac hypertrophy and failure. Abnormalities in cytoskeletal components not only cause structural defects but also impair mechanotransduction. The cytoskeleton not only interacts with the extracellular matrix via transmembrane proteins such as integrins but also registers adjacent Z-discs to one another, to the cell membrane, and to the nuclear envelope through a delicate network. A number of signaling partners bind to the network either directly or via linker proteins. For example, the muscle LIM protein (MLP) gene encodes a muscle-specific cytoskeletal protein interacting with titin and telethonin (T-cap). Studies in genetically engineered mice with targeted ablation of MLP suggest that the titin-telethonin-MLP complex may serve as a stretch sensor in cardiac muscle cells. There is growing interest in examining the role of cytoskeletal components in ion channel regulation under physiological and pathological conditions.

DCM, characterized by ventricular dilatation and diminished contractile function, accounts for more than 80% of non-HCMs. DCM has a population prevalence of approximately 1 in 500 and is associated with prognostically adverse arrhythmias at initial disease presentation in up to one-third of patients.297 While increased age, male sex, and impaired ventricular function are established arrhythmic risk factors, arrhythmias also occur in patients with no known risk factors. Approximately 20%–35% of DCM cases are familial. Although impaired force generation, energy shortage, and compromised calcium homeostasis could cause DCM, impaired force transmission and/or defective mechanotransduction caused by defects in cytoskeletal proteins such as desmin, lamin A/C, α-actin, δ-sarcoglycan, dystrophin, plakoglobin, desmoplakin, MLP, and telethonin appear to be a prevalent mechanism underlying DCM.

4.3.1. Myofibrillar cytoskeleton
The myofibrillar cytoskeleton is composed of the thin and thick myofilaments of the sarcomere as well as titin filaments, providing the foundation for myocyte contraction and relaxation. The basic unit of a myofibril is called the sarcomere and is defined as the region between two Z-discs. The actin cross-linker protein α-actinin is a classical marker for Z-discs; however, Z-discs house a large number of other cytoskeletal and signaling proteins. The sarcomere, which is the smallest contractile unit of striated muscle, has its lateral boundaries defined by the protein-dense Z-discs, which cross-link the barbed ends of actin-based thin filaments from adjacent sarcomeres via α-actinin and are bordered by the I-band, the region on either side of a Z-disc that is devoid of myosin-containing thick filaments.
The A-band comprises the region extending the entire length of the thick filaments, and the M-band resides at the center of the A-band. The force of muscle contraction is generated when the myosin motor protein attaches to the actin filament and pulls the Z-discs toward the M-band. The sarcomere is not a static structure and responds to alterations in muscle load and injury. Z-discs also serve as an anchor site for the N-terminus of titin and for the nebulin and nebulette filament systems, making them indispensable for transmitting contractile force.

Z-discs anchor the thin filaments, which are composed of actin, tropomyosin, and the troponin complex. Tropomyosin and the troponin complex are crucial for contraction regulation at the thin filament level, which is triggered by calcium. The thick filaments are composed of myosin dimers (a myosin consists of a myosin heavy chain and two myosin light chains), which are arranged in bipolar filaments, with the myosin tails making up the central region of the sarcomere and the heads interdigitating with the thin filaments. Myosin-binding protein C is associated with a subset of the myosin heads and contributes to controlling contraction at the thick filament level. The third filament system is called the elastic filaments and consists of titin.

Variants in Z-disc-associated proteins are linked to numerous cardiomyopathies and skeletal myopathies.298–301 α-actinin is the predominant Z-disc protein. There are four vertebrate α-actinin genes with overlapping functions; however, only ACTN2 is found in cardiac muscle.302 The N-terminal actin-binding domain is linked to an α-actinin-2 homodimer cross-linking two antiparallel actin filaments of adjacent sarcomeres, forming a flexible tetragonal lattice.303 This lattice is essential for the rigidity that the Z-disc requires to serve as a structural anchor site, while still allowing for the flexibility needed to conform to contractile forces. α-actinin has a myriad of binding partners, with each interaction serving a distinct role in the production of concerted contractile action. The major Z-disc proteins that interact with ACTN2 include actinin-associated LIM protein, muscle LIM protein, the N-terminus of titin, myotilin, CapZ, Z-band alternatively spliced PDZ-motif protein (ZASP), the filamin-, α-actinin-, and telethonin-binding protein at the Z-disc (FATZ), myopalladin, and myopodin.304–306 Independent studies have reported that human variants in the ACTN2 gene are associated with DCM, HCM, idiopathic VF, LVNC, and atrial arrhythmias.307

Filamin protein family members also bind and cross-link actin. There are three filamin proteins: filamin-A (α isoform), filamin-B (β isoform), and the striated muscle-specific filamin-C (γ isoform). Filamin-C (γ-filamin) is one of the major proteins that serves as a link between the costamere and the Z-disc and is involved in signal transduction with integrins. Filamin-C functions through interactions with sarcolemmal muscle cell membrane proteins such as the γ- and δ-sarcoglycans of the dystrophin glycoprotein complex,308 the β1A-subunit of the integrin receptor complex,309 and Z-disc proteins (such as myotilin310 and FATZ309,311,312).
An autosomal dominant nonsense variant, p.Trp2710Ter, in the last exon of the human filamin-C gene interferes with the protein's dimerization and causes filamin-C to aggregate within skeletal muscle fibers, a phenomenon that eventually leads to myofibrillar myopathy.313,314 Many of the proteins within the myofibrillar cytoskeleton have been shown to cause cardiac and/or skeletal myopathy. Review of the details of patients with pathologic variants in the genes encoding these proteins, with disturbance of protein function, has demonstrated a significant association with early-onset arrhythmias, conduction system disease, and sudden cardiac arrest or death, consistent with an arrhythmogenic form of cardiomyopathy.

4.3.2. ZASP/LDB3
ZASP/LIM domain binding 3 (LDB3) is one of the major Z-disc proteins in cardiac muscle315 and plays an important role in stabilizing the Z-disc structure through its PDZ-mediated interaction with α-actinin-2, the main actin cross-linker of the Z-disc, and with F-actin, the main cytoarchitectural protein of cardiomyocytes.316 Global ablation of the murine ZASP homolog cypher can disorganize the sarcomere and cytoskeleton, leading to severe cardiomyopathy and skeletal myopathy in mice and humans,317 whereas cardiac-specific ablation of cypher can cause DCM and SCD.318 The product of SCN5A, the Nav1.5 channel, localizes at the cardiomyocyte membrane along the sarcomeric Z-lines via α-actinin-2, thus connecting Nav1.5 to actin filaments.319 ZASP/telethonin contributes to localizing Nav1.5 to the T-tubule membrane at the Z-line, creating a multiprotein complex associated with α-actinin-2. Variants in the ZASP/LDB3 gene have been shown to cause abnormalities in sodium channel function.

Vatta et al were the first to describe pathologic variants in ZASP/LDB3 in patients with DCM and LVNC, identifying 6 (6%) of 100 probands screened.126 Pathologic variants in ZASP/LDB3 were identified in 2 families and 4 sporadic cases. Of the 9 familial and sporadic patients affected, 3 had early-onset conduction system abnormalities and ventricular arrhythmias, including sinus bradycardia, second-degree AV block, PVCs, VT, intraventricular conduction delay, ventricular bigeminy, and LBBB. Subsequent reports of patients with arrhythmias and conduction disease associated with DCM and LVNC have supported the causative connection with variants in ZASP/LDB3. Arimura et al320 reported on a family with 6 affected members who developed DCM between 50 and 69 years of age, consistent with late-onset DCM, 3 of whom died suddenly.
Xi et al321 studied one of the original ZASP/LDB3 pathologic variants reported by Vatta et al126 and demonstrated several underlying mechanisms by which the ZASP-D117N variant (a ZASP/LDB3 variant identified in patients with DCM/LVNC associated with intraventricular conduction delay, ventricular bigeminy, and LBBB) can cause intraventricular conduction delay: (1) ZASP-D117N can cause loss of function of Nav1.5 in human cell lines and in neonatal cardiomyocytes; (2) in silico simulation using the Luo-Rudy model showed that the extent of the functional disturbance of Nav1.5 caused by ZASP-D117N is sufficient to delay cardiac conduction in human hearts; (3) the interaction between ZASP and Nav1.5 requires preservation of the Z-disc protein complex; and (4) the modification of Nav1.5 by ZASP-D117N occurs without significant disruption of Z-line structures in cardiomyocytes.321

Although Nav1.5 preferentially localizes at the ID via SAP97 and at the LMs via the dystrophin-associated protein complex (2 pools), localization at the T-tubular system also occurs.322,323 Upon posttranslational modification, Nav1.5 remains attached to the cytoskeleton, linked to multiprotein complexes and stored in subcellular compartments. Nav1.5 is also known to localize at the cardiomyocyte membrane along the sarcomeric Z-lines via α-actinin-2, thus connecting Nav1.5 to actin filaments.319 The study by Xi et al therefore suggests that electrical remodeling may precede anatomical remodeling in DCM/LVNC associated with ZASP, with the loss of function of Nav1.5 caused by the mutated ZASP occurring without significant disruption of cytoarchitectural networks.321 This is particularly important in the clinical setting, since patients who carry ZASP-D117N may develop arrhythmias even before manifesting HF symptoms. The loss of function of Nav1.5 caused by ZASP-D117N appeared to be largely responsible for the conduction delay.

More recently, Lopez-Ayala et al reported on a family in which a pathologic variant in ZASP/LDB3 was associated with ARVC.158 The index patient and her first-degree and second-degree relatives underwent a complete clinical evaluation. After pathologic variants in the 5 desmosomal genes were ruled out, genetic testing using NGS was performed on the proband, who had a long-standing history of presyncope. The proband experienced syncope associated with sustained VT that required electrical cardioversion to restore sinus rhythm. Her ECG showed complete right bundle branch block (RBBB), with inverted and flat T waves in the precordial leads. Echocardiogram and CMR showed biventricular dilation and severe biventricular systolic dysfunction; midwall LGE affecting the LV was also identified. An ICD was recommended. However, the patient died in the operating room during the surgical procedure as a result of an anesthetic complication. The postmortem examination demonstrated extensive fibrofatty replacement in the RV, extensive fibrosis in the LV, and limited inflammatory patches, consistent with a diagnosis of ARVC. A heterozygous pathogenic missense variant in ZASP/LDB3 (c.1051A>G) was identified, and another 6 carriers were identified in her family via cascade screening. Three of these relatives fulfilled the criteria for a definitive diagnosis of ARVC, and another reached a borderline diagnosis. These relatives had symptoms including frequent palpitations; abnormal ECGs that showed inverted T waves in the right precordial and inferior leads; signal-averaged ECGs that showed late potentials; 24-hour Holter monitoring studies that showed runs of idioventricular rhythm and ventricular ectopic beats; and CMR that showed a dilated RV with severe systolic dysfunction and a normal LV with no LGE. A number of the relatives were started on beta-blockers. On the basis of this family, the authors suggested a direct link between ACM with biventricular involvement and pathogenic variants in ZASP/LDB3.

4.3.3. α-actinin-2
α-actinin-2 (ACTN2) is a prominent member of the Z-disc found in cardiac muscle; it has an N-terminal actin-binding domain and creates a lattice-like structure that is essential for the rigidity the Z-disc needs to serve as a structural anchor site, while still allowing for the flexibility needed to be responsive to contractile forces.303,324,325 The protein's primary function is to anchor and cross-link actin filaments in the cardiac Z-disc at the lateral boundaries of the sarcomere.306 The Z-disc provides structural support by tethering the sarcomere to the sarcolemma via the costameres and by anchoring filamentous F-actin, titin, and nebulette.305 As one of the integral Z-disc proteins, α-actinin has a myriad of binding partners, with each interaction serving a distinct role in the production of concerted contractile action.306 The major Z-disc proteins that interact with α-actinin-2 and α-smooth muscle actin (ACTA2) are actinin-associated LIM protein, muscle LIM protein, the N-terminus of titin, myotilin, CapZ, ZASP, the filamin-, α-actinin-, and telethonin-binding protein at the Z-disc (FATZ), myopalladin, and myopodin. ACTA2 has also been demonstrated to bind phosphorylase-b, an important metabolic enzyme, at the Z-disc. Furthermore, there is evidence that ACTN2 directly interacts with cardiac ion channels (such as the potassium ion channels KCNA4 and KCNA5326,327 and the sodium ion channel SCN5A319) and forms a bridge between the calcium ion channels CACNA1C and CACNA1D.328 Thus, disruption of ACTN2 may affect the localization and function of cardiac ion channels.

One presentation of ACM was reported by Bagnall et al, who performed exome sequencing in a four-generation family with idiopathic VF, LVNC, and sudden death and identified a pathologic variant in the ACTN2 gene.329 Clinical evaluation of the family identified marked cardiac phenotype heterogeneity, with some individuals being asymptomatic and others having LVNC, resuscitated cardiac arrest due to idiopathic VF, DCM, or sudden unexplained death. WES identified an Ala119Thr pathologic variant in ACTN2 that segregated with disease. The 22-year-old female proband presented with syncope and a family history of premature sudden unexplained death (her 25-year-old sister died in her sleep). The proband's ECG showed sinus rhythm with nonspecific ST-T wave changes, and her echocardiogram and CMR showed prominent LV apical trabeculations with preserved LV systolic function, consistent with LVNC. There were no inducible arrhythmias on EPS, and her QTc measured 440 ms. The proband was implanted with an ICD. Her father had a history of dyspnea, LBBB, and LV dilation with a reduced LVEF of 27%.
One of the proband's female cousins experienced a resuscitated cardiac arrest, and her CMR revealed normal LV and RV indexed dimensions and function, with no evidence of myocardial fibrosis. The cousin was found to have idiopathic VF and was implanted with an ICD, which subsequently delivered two appropriate shocks. She responded successfully to quinidine therapy. The authors speculated that the various clinical presentations of Ala119Thr result from a stochastic disruption of one of the many functional roles of ACTN2.

In another report, Girolami et al assessed a large 4-generation Italian family, 18 members of which underwent direct clinical assessment and genetic testing, including the proband.330 Eleven individuals had evidence of autosomal-dominant cardiomyopathy and had variable combinations of 3 distinctive features: regional LV noncompaction with LV hypertrophy, atrial septal defect, and early-onset supraventricular arrhythmias and AV block. In most of these patients, frequent premature atrial contractions that developed into atrial fibrillation or flutter represented the initial clinical manifestation. These arrhythmic manifestations were an essential part of the phenotypic spectrum. The onset of supraventricular arrhythmias followed a common pattern, initially presenting with very frequent premature atrial contractions, proceeding to paroxysmal atrial fibrillation (between 30 and 50 years of age), and then to permanent atrial fibrillation requiring a pacemaker due to slow ventricular conduction. Many of the family members were treated with ICDs. The authors suggested that ACTN2 pathologic variants may directly participate in the genesis of familial supraventricular arrhythmias.

4.3.4. Filamin-C
Filamin protein family members also bind and cross-link actin. There are 3 filamin proteins, with filamin-C (γ isoform) the only striated muscle-specific protein. In addition to the N-terminal actin-binding domain, there is a Z-disc localization motif.310 Filamin-C is one of the major proteins that serves as a link between the costamere and Z-disc and is involved in signal transduction with integrins. Filamin-C directly interacts with 2 protein complexes that link the subsarcolemmal actin cytoskeleton to the extracellular matrix: the dystrophin-associated glycoprotein complex and the integrin complex. At IDs, filamin-C is located in the fascia adherens, where myofiber ends reach the sarcolemma, adjacent to the position of desmosomal junctions. Filamin-C functions through interactions with the sarcolemmal muscle cell membrane dystrophin-associated glycoproteins (such as γ- and δ-sarcoglycans308), the β1A-subunit of the integrin receptor complex, and Z-disc proteins (such as myotilin310 and FATZ309,311,312). The participation of filamin-C in the attachment of the sarcomere's Z-disc to the sarcolemma (costameres) and to the IDs allows cell-to-cell mechanical force transduction. FLNC pathologic variants have been associated with myofibrillar myopathies, as well as cardiomyopathies.

Ortiz-Genga et al studied the FLNC gene using NGS in 2877 patients with inherited cardiovascular diseases,34 with clinical and genetic evaluation of 28 affected families. The authors identified a characteristic phenotype in probands with truncating variants in FLNC, as well as 23 truncating pathologic FLNC variants in 28 probands previously diagnosed with DCM, ACM, or RCM. The authors also identified 54 pathologic variant carriers among 121 screened relatives.
The phenotype consisted of LV dilation (68%), systolic dysfunction (46%), and myocardial fibrosis (67%) on imaging; inferolateral negative T waves and low QRS voltages on the ECG (33%); ventricular arrhythmias (82%); and frequent SCD (40 cases in 21 of 28 families). The authors observed no clinical skeletal myopathy. Penetrance was >97% in carriers over 40 years of age, and there was an autosomal dominant inheritance pattern. Immunohistochemical staining of myocardial tissue showed no abnormal filamin-C aggregates in patients with truncating FLNC pathologic variants. Isolated or predominant RV involvement, common with desmosomal pathogenic variants, was not observed. Unlike patients with lamin A/C, emerin, or desmin pathogenic variants, these patients had mild and infrequent cardiac conduction abnormalities. The authors suggested consideration of prompt implantation of a cardiac defibrillator for affected patients harboring truncating pathogenic variants in FLNC.

4.3.5. Extramyofibrillar cytoskeleton
The extramyofibrillar cytoskeleton consists of microfilaments (actin), microtubules, and IFs (desmin). It connects the sarcomere with the sarcolemma and extracellular matrix through the Z-disc and submembrane cytoskeleton,331–334 thereby transmitting the force produced by the sarcomeres. The extramyofibrillar cytoskeleton also provides support for subcellular structures, organizes the cytoplasm, regulates sarcolemma topography, and transmits intercellular and intracellular mechanical and chemical signals.

4.3.5.1. Desmin filaments
Desmin is the main IF protein and is deemed necessary for cardiomyocyte structural integrity, the allocation and functionality of its mitochondria, the positioning of the nucleus, and sarcomere genesis.334,335 The IFs create a 3-dimensional skeleton covering the entire cytoplasm, enveloping the Z-discs and extending from one Z-disc to another. IFs are also involved with other cell organelles, including the SR and the T-tubular system. These desmin filaments extend from the Z-disc to the costameres, where they are bound through plectin and dysferlin, extend to the ID, and emerge from the Z-discs of the perinuclear myofibrils to the nuclear membrane.

Pathogenic variants in DES, encoding desmin, have been shown to cause severe skeletal and cardiac muscle diseases with heterogeneous phenotypes. DES variants have also been found in patients with DCM and ARVC. Brodehl et al336 identified two novel variants in DES (p.Ala120Asp [c.359C>A] and p.His326Arg [c.977A>G]) in a family with a broad spectrum of cardiomyopathies and a striking frequency of arrhythmias and SCDs. In vitro experiments with desmin-p.A120D identified a severe intrinsic filament formation defect causing cytoplasmic aggregates in cell lines and in the isolated recombinant protein. Model variants of codon 120 indicated that ionic interactions contributed to this filament formation defect. Ex vivo analysis of ventricular tissue slices revealed a loss of desmin staining within the ID and severe cytoplasmic aggregate formation, whereas Z-band localization was not affected. The authors proposed that the loss of desmin-p.A120D filament localization at the ID underlies its clinical arrhythmogenic potential.
Bermúdez-Jiménez et al more recently demonstrated impaired filament formation and disruption of cell membrane integrity in a severe form of arrhythmogenic LV cardiomyopathy due to a DES pathogenic variant, p.Glu401Asp, in a large family.337

Variants in the DES gene result in striated muscle disorders characterized by the formation of inclusion bodies, weakening of the desmin cytoskeleton, disruption of subcellular organelle organization, and eventually myofibril degradation. These muscle disorders are referred to as desmin-related myopathy or desminopathy and often present in young childhood, with patients experiencing increasing muscle weakness. These disorders are associated with a wide spectrum of clinical phenotypes, even within the same family, and range from scapuloperoneal, limb girdle, and distal myopathic phenotypes with variable cardiac or respiratory involvement to pure cardiomyopathies.338

To date, multiple reports of ACM caused by pathogenic DES variants have been published. DES variants have been previously reported in conduction disease and cardiomyopathies, in particular cases of DCM,339 and, more recently, in ARVC.169 The first of these, the DES pathogenic variant p.N116S, was identified in a 17-year-old patient with ARVC and concomitant subclinical skeletal muscle alterations; this variant led to an amino acid substitution that in turn led to aggresome formation in cardiac and skeletal muscle.340,341 All other reported ARVC-related DES variants underlie a clinically heterogeneous phenotype, frequently associated with muscle abnormalities, including a DES-p.S13F pathogenic variant identified in 39 family members from 8 Dutch families169,342 with associated variable skeletal myopathy and a wide spectrum of cardiomyopathies, including 2 patients with ARVC. Another DES variant, p.N342D, was described in patients affected with desmin-related myopathies.343 The association of this variant with RV cardiomyopathy was also noted in select patients.342,344 A DES-p.P419S variant was identified by exome sequencing in a large Swedish family showing myofibrillar myopathy and ARVC (ARVC7 locus).122 Bermúdez-Jiménez et al described a multigenerational family in which approximately 30 family members affected with an ACM phenotype hosted a rare missense pathogenic variant of the DES gene (c.1203G>C; p.Glu401Asp).337 The study showed that the DES-Glu401Asp variant caused the disease in the family, with 100% penetrance and variable expressivity. The phenotype was arrhythmogenic, with a high risk of SCD and progressive HF. In 4 of the individuals studied, RV involvement was observed, and 2 had epsilon waves. Fibrofatty infiltration was identified, predominantly in the LV, and the cardiomyocytes had reduced cellular adhesion, reminiscent of the defect found in ARVC, along with reduced expression of DES and cell–cell junction proteins.

4.4. Sarcomeric defects
The cardiac sarcomere is the fundamental contractile unit of the cardiomyocyte. Genetic variants in sarcomere genes are a well-established cause of HCM and, in some cases, can cause familial DCM, LVNC, and RCM.345 Variants in MYBPC3 account for approximately 50% of all genotyped HCM cases, with most being loss-of-function variants, whereas missense variants in MYH7 account for 30% of cases.346,347 Other genes, such as TNNT2, TNNI3, TPM1, ACTC1, MYL2, and MYL3, account for <5% of HCM cases each.
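As a rough illustration of what the proportions quoted above imply for the yield of a genetic testing panel, the following sketch tallies approximate per-gene shares. The figures are taken from the preceding paragraph; the minor genes are entered at their <5% upper bound, so the totals are rough upper estimates rather than exact or mutually exclusive values.

```python
# Approximate shares of genotype-positive HCM attributable to each gene,
# using the rounded figures quoted in the text above (the minor genes
# are entered at their <5% upper bound, so sums are rough estimates).
HCM_GENE_SHARE = {
    "MYBPC3": 0.50,
    "MYH7": 0.30,
    "TNNT2": 0.05,  # <5% each for the remaining genes
    "TNNI3": 0.05,
    "TPM1": 0.05,
    "ACTC1": 0.05,
    "MYL2": 0.05,
    "MYL3": 0.05,
}

def approximate_panel_yield(genes: list[str]) -> float:
    """Rough upper-bound fraction of genotyped HCM cases covered by a
    panel restricted to `genes`, assuming one causal variant per case."""
    return sum(HCM_GENE_SHARE.get(gene, 0.0) for gene in genes)

# A two-gene core panel already covers the large majority of cases
print(f"MYBPC3+MYH7: ~{approximate_panel_yield(['MYBPC3', 'MYH7']):.0%}")
# Adding a thin-filament gene adds at most a few percent more
print(f"+TNNT2: ~{approximate_panel_yield(['MYBPC3', 'MYH7', 'TNNT2']):.0%}")
```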
A recent study investigating variant excess in cases compared with the Exome Aggregation Consortium control population348 showed variants in MYH7, TNNC1, TNNT2, and TPM1 significantly enriched in patients with DCM.100 Specifically, MYH7 accounts for approximately 3%–4% of familial DCM cases.339 Sarcomere gene variants contribute to cases of LVNC, although most often in phenotypes that include another cardiomyopathy, cardiac malformation, and/or reduced ejection fraction, with MYH7 variants contributing the most cases.349,350 Other genes encoding sarcomeric and Z-disc proteins have also been identified in individuals with LVNC, including ACTC1, MYBPC3, TNNT2, TPM1, TTN, and LDB3. RCM in childhood can be caused by variants in the thin filament genes TNNT2, TNNI3, and TPM1.351

The presence of a sarcomere variant is associated with worse outcomes in HCM, with patients with sarcomere-positive HCM having poorer survival free from major cardiovascular events compared with patients with gene-elusive HCM.347,352 Similarly, a recent study of LVNC cases showed a greater risk of major cardiovascular events in patients with a sarcomere variant compared with those without.353 Using NGS, Wang et al targeted and sequenced 73 genes related to cardiomyopathy in 102 patients with LVNC, with 63% of pathogenic variants in sarcomere-encoding genes and 12% in ion channel-encoding genes.354

4.5. Metabolic defects
The clinical manifestations of inherited disorders of fatty acid oxidation vary according to the enzymatic defect and can present as isolated cardiomyopathy (DCM, HCM), sudden death, progressive skeletal myopathy, and hepatic failure. Arrhythmias can be a presenting symptom of fatty acid oxidation deficiencies.355 Over a 25-year period, Bonnet et al diagnosed 107 patients with an inherited fatty acid oxidation disorder; arrhythmia was the predominant presenting symptom in 24 (22%) of these patients.355 These 24 cases included VT (n = 15), atrial tachycardia (n = 4), sinus node dysfunction with episodes of atrial tachycardia (n = 4), AV block (n = 6), and LBBB (n = 4) in newborn infants. The authors observed conduction disorders and atrial tachycardias in patients with defects of long-chain fatty acid transport across the inner mitochondrial membrane (carnitine palmitoyl transferase type II deficiency and carnitine acylcarnitine translocase deficiency) and in patients with trifunctional protein deficiency. VTs were seen in patients with any type of fatty acid oxidation deficiency. The authors concluded that accumulation of the arrhythmogenic intermediary metabolites of fatty acids, such as long-chain acylcarnitines, could be responsible for the development of arrhythmias and that inborn fatty acid oxidation errors may cause unexplained sudden death or near-miss events in apparently healthy infants and in those with conduction defects or VT. Diagnosis is determined by a serum acylcarnitine profile.

Specifically, inborn fatty acid oxidation errors result in metabolite buildup proximal to the enzyme defect and in deficient formation of energy-yielding substrates downstream of the block. In the defects downstream from carnitine palmitoyl transferase I, the acylcarnitine that accumulates has detergent properties, which may explain its toxicity. Indeed, the amphiphilic lipid metabolites long-chain acylcarnitine and lysophosphatidylcholine accumulate during myocardial ischemia and play a pivotal role in the production of arrhythmias.
Incorporation of long-chain acylcarnitine into the sarcolemma elicited electrophysiological anomalies analogous to those seen in acute myocardial ischemia.356 The cellular electrophysiological bases of the proarrhythmic effects of long-chain acylcarnitine appear to be multifactorial. First, reduction of the single-channel conductance of the inwardly rectifying K+ current by amphipathic lipid metabolites may account for automatic AP discharges from the resting and plateau potentials, leading to VT. Second, retardation of conduction velocities by the decrease in the excitatory Na+ current could produce conduction anomalies and give rise to reentry.357 Third, nonesterified fatty acids directly activate voltage-dependent Na+ currents in cardiac myocytes, inducing cytotoxic calcium overload.358 Finally, amphipathic metabolites can interfere with the GJs and disturb the cell membrane's lipid-protein interface, thereby impairing GJ channels.359 These toxic effects on ionic currents have not been observed with short- and medium-chain acylcarnitines.356

Systemic primary carnitine deficiency is a carnitine transporter deficiency. Free carnitine is freely filtered by the renal glomeruli, and 95% of it is normally reabsorbed by the renal tubules via a high-affinity carnitine transporter in the cellular plasma membrane. Carnitine is not catabolized in humans, and its only metabolic conversion is through ester formation, with most esterified carnitine excreted in urine. Active carnitine transport from blood into cells is mediated by the same transporter that functions in the kidneys. The carnitine transporter OCTN2 is encoded by the SLC22A5 gene and transports carnitine in a sodium-dependent manner.360,361

Carnitine transporter deficiency is inherited as an autosomal recessive trait. As a result of the deficiency, carnitine is not reabsorbed in the kidneys, leading to urinary loss and depletion of blood and tissue levels, resulting in severe impairment of long-chain fatty acid oxidation and hypoketotic hypoglycemia with fasting and stress. Age at presentation can range from infancy to adulthood, but neonatal hypoglycemia and sudden death can occur. Clinical manifestations in early-onset disease include chronic or acute skeletal myopathy and cardiomyopathy, typically exacerbated by metabolic decompensation. Untreated heart disease proceeds to DCM with reduced LVEF or mild interventricular septal hypertrophy. Electrocardiographic findings include abnormal T waves, ventricular hypertrophy, and atrial arrhythmias. Life-threatening arrhythmias can occur, including NSVT with periods of sinus rhythm and ventricular premature beats, even in the presence of only borderline LV hypertrophy. Carnitine supplementation is typically administered at a dose of 200 to 300 milligrams per kilogram of body weight per day, divided throughout the day (a worked dosing sketch follows).
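To make the carnitine dosing rule above concrete, here is a minimal sketch. The 200-300 mg/kg/day range divided into several doses is the figure quoted in the text; the helper name, the midpoint default, the 3-dose schedule, and the example weight are illustrative assumptions only, not prescribing guidance.

```python
def daily_carnitine_doses(weight_kg: float,
                          dose_mg_per_kg: float = 250.0,
                          doses_per_day: int = 3) -> list[float]:
    """Split a total daily L-carnitine dose into equal doses.

    The 200-300 mg/kg/day range divided throughout the day is the figure
    quoted in the text; the midpoint default and 3-dose schedule are
    illustrative assumptions -- this is not prescribing guidance."""
    if not 200.0 <= dose_mg_per_kg <= 300.0:
        raise ValueError("dose outside the quoted 200-300 mg/kg/day range")
    total_mg = weight_kg * dose_mg_per_kg
    return [total_mg / doses_per_day] * doses_per_day

# Hypothetical 12 kg child at the midpoint of the quoted range
print(daily_carnitine_doses(12.0))  # three doses of 1000 mg each
```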
4.6. Mitochondrial forms
The presentation of mitochondrial cardiomyopathy includes HCM, DCM, and LVNC forms,362,363 and the severity can range from asymptomatic to devastating multisystem disease.364 Severe cardiac manifestations include SCD, HF, and ventricular tachyarrhythmia, which can worsen acutely during a metabolic crisis. Mitochondrial crises are often precipitated by physiological stressors such as febrile illness and surgery and can be accompanied by acute HF.

Most patients with neuromuscular symptoms present with normal or slightly elevated creatine kinase levels, a normal electromyogram, and normal results of nerve-conduction studies.365–367 Abnormal liver enzyme levels have been found in up to 10% of patients.365,368 Sensorineural hearing loss occurs in 7%–26% of patients, and its prevalence increases with age.369,370

Patients with myoclonic epilepsy with ragged red fibers (MERRF) and mitochondrial encephalopathy, lactic acidosis, and stroke (MELAS) should be monitored for the development of cardiac hypertrophy and DCM. Patients with MERRF can present with myoclonus, generalized convulsions, cerebellar ataxia, muscular atrophy, and elevated blood lactate and pyruvate levels, as well as ragged red fibers in muscle biopsy specimens. A case series of patients with MERRF and an m.8344A>G variant of mtDNA revealed that early age at onset was the only factor associated with the occurrence of myocardial disease.371 The development of myocardial disease in this cohort was associated with a higher risk of SCD. Patients with MELAS can also present with ragged red fibers in the muscle biopsy; however, unlike patients with MERRF, patients with MELAS have normal early development and start to show symptoms only between 3 years of age and adulthood. Patients with MELAS tend to have short stature, seizures, hemiparesis, hemianopia, and blindness.372

Mitochondrial variants are also common causes of LVNC in young children. LVNC is characterized by prominent ventricular trabeculations and deep recesses that extend from the LV cavity to the subendocardial surface of the ventricle, accompanied or not by LV dysfunction.373–375

Studies have shown the importance of substrate flexibility in preserving normal cardiac function. In experimental models of pressure overload and in failing human hearts, metabolism shifts from oxidizing fatty acids (the preferred substrate in the healthy heart) to oxidizing glucose for energy production. This metabolic switch is associated with the downregulation of genes involved in mitochondrial biogenesis and fatty-acid metabolism and is mediated by the deactivation of PPAR-α and its coactivator, PGC-1α, a member of a family of transcriptional coactivators involved in mitochondrial regulation and biogenesis. An increased reliance on glycolytic pathways could effectively reduce oxygen consumption in the short term; over time, however, reduced oxygen consumption might enable the progression of heart disease by creating an energy-deficient state.376 Experimental evidence has shown that both elevated fatty-acid flux and fatty-acid oxidation (FAO)-deficient states can be associated with cardiac dysfunction. Both chronic increases in FAO (as observed in diabetes) and decreases in FAO (as seen in pressure-overload models of HF) can lead to HF.377 Accordingly, energy deficiency can be broadly conceived as both a cause and an effect of HF.

The management of mitochondrial disease and cardiomyopathy is largely supportive. Physicians should be aware that patients can make a remarkable recovery from a severe crisis state. Pharmacological strategies include the use of various dietary supplements. A typical "mitochondrial cocktail" would include coenzyme Q10, creatine, L-carnitine, thiamine, riboflavin, folate, and other antioxidants, such as vitamins C and E.
Studies have suggested that the use of antioxidants partially improves clinical features.376,378 In contrast, a systematic review by Chinnery et al found no clear evidence to support the use of any supplement in patients with mitochondrial disease.379

The mortality rate can be high for patients with mitochondrial disease that progresses to a crisis state, such as acute or subacute multiorgan failure secondary to mitochondrial respiratory chain dysfunction that worsens due to fever, illness, stress, medications, or heat; urgent treatment is therefore necessary. Crises can be associated with severe lactate elevations, and cardiac complications during a crisis include cardiogenic shock, atrial and ventricular arrhythmias, DCM, and SCD. Patients often have baseline acidemia, and the correction of acidosis should be gradual. Oxygenation can worsen the crisis by increasing free-radical production; the partial pressure of oxygen therefore needs to be maintained between 50 and 60 mm Hg.380,381

Patients with mitochondrial disease who present with fever or who are unable to eat or drink may be administered dextrose-containing intravenous fluids (preferably D10 with half-normal saline) at a maintenance dose, regardless of blood glucose levels. Their metabolic and volume status should be evaluated periodically. The management of these patients' cardiac complications, including HF, bradyarrhythmias, and tachyarrhythmias, follows the same guidelines as those for the general population. If cardiac dysfunction is noted during a crisis, patients should be closely monitored using serial echocardiography. In selected patients who have advanced HF due to cardiomyopathy, cardiac transplantation may be needed. Three pediatric patients with mitochondrial cardiomyopathy who underwent cardiac transplantation reportedly had excellent early and late outcomes.382

4.6.1. Kearns-Sayre syndrome
Kearns-Sayre syndrome (KSS) is a mitochondrial myopathy characterized by the clinical triad of ptosis, chronic progressive external ophthalmoplegia, and abnormal retinal pigmentation and is associated with cardiac conduction defects and DCM, sometimes requiring transplantation.383,384 Approximately 50% of patients with KSS have cardiac involvement, including recurrent syncope, bundle branch block, fascicular block, and nonspecific intraventricular conduction disturbances; 20% of deaths in these patients have been attributed to cardiac causes. In a guidelines publication, the ACC/AHA/HRS assigned a class I recommendation (with an LOE B rating) to pacemaker implantation for third-degree and advanced second-degree AV block at any anatomic level when associated with neuromuscular diseases and AV block. Skeletal muscle histopathology commonly demonstrates ragged red fibers. The genetic abnormalities observed in KSS consist largely of single large-scale mitochondrial DNA deletions, although mitochondrial DNA point variants in the tRNA(Leu) gene, such as m.3249G>A, m.3255G>A, and m.3243A>G, have also been reported.384,385

4.7. Histiocytoid (oncocytic) cardiomyopathy
Infantile histiocytoid cardiomyopathy is a rare but distinctive arrhythmogenic disorder characterized by incessant VT, cardiomegaly, and sudden death within the first 2 years of life if left untreated.
Approximately 100 histiocytoid cardiomyopathy cases have been reported in the literature;386–400 however, the prevalence is likely to be higher, given that many cases of histiocytoid cardiomyopathy could have been misdiagnosed as sudden infant death syndrome.401 Female preponderance is approximately 4:1, with most cases (90%) occurring in girls under 2 years of age, leading to intractable VF or cardiac arrest. The lesion resembles a hamartoma with histiocytoid or granular cell features.400 The condition has clearly been defined as a mitochondrial disorder and affects the function of complexes I and III of the respiratory chain of the cardiac mitochondria.400 The etiology favors either an autosomal recessive gene or an X-linked condition.

Histopathological findings in patients with histiocytoid cardiomyopathy include multiple flat-to-round, smooth, yellow nodules located beneath the endocardial surface of the LV, the atria, and the four cardiac valves. The nodules are composed of demarcated, large, foamy granular cells. Glycogen, lipids, and pigment may be observed in these cells, as well as a lymphocytic infiltrate. Immunostaining shows perimembranous immunoreactivity for muscle-specific actin, but not for the histiocytic markers S-100 protein and CD69.387,391,398,402,403 These cells may be abnormal Purkinje cells; however, a primitive myocardial precursor cannot be excluded. Radiofrequency ablation or pacemaker implantation may be required to treat arrhythmias.404 Surgical intervention with prolonged survival has been reported.405

Shehata et al reported two probands with de novo nonsense variants in the X-linked nuclear gene NDUFB11, which had not previously been implicated in any disease, despite evidence that deficiency of other mitochondrial electron transport complex I members leads to cardiomyopathy.406 A third proband was doubly heterozygous for inherited rare variants in additional components of complex I, NDUFAF2 and NDUFB9, confirming that histiocytoid cardiomyopathy is genetically heterogeneous. In a fourth case, the proband with histiocytoid cardiomyopathy inherited a mitochondrial variant from her heteroplasmic mother, as did her brother, who presented with cardiac arrhythmia. A causal role for NDUFB11 truncation in the etiology of histiocytoid cardiomyopathy helps explain the disease's female bias. Whereas most complex I deficiencies are thought to be inherited in a Mendelian recessive manner, these two de novo variants establish a dominant haploinsufficient phenotype.

Section 5 Other disorders
5.1. Infiltrative cardiomyopathies: amyloidosis
See Evidence Table: ACM Amyloidosis. A recommendation flow diagram is shown in Figure 19.

Cardiac amyloidosis refers to the extracellular deposition of low molecular weight proteins within the myocardium, usually occurring in the context of more widespread organ involvement. The amyloid deposits are typically formed by one of two proteins: light chains or transthyretin.407,408 Isolated atrial amyloidosis due to atrial natriuretic peptide deposition typically occurs in older age, and small studies have suggested its role in atrial fibrillation.409,410 Light chain amyloidosis (AL amyloidosis) is secondary to a primary blood dyscrasia, which drives an abnormal proliferation of plasma cells and subsequently the monoclonal overproduction of light chains.
Chemotherapy and stem cell transplantation have transformed care and vastly improved survival for AL amyloidosis.411 Transthyretin amyloid is composed of a different protein, a misfolded prealbumin that also produces amyloid fibrils and deposits in tissues.408 Treatment includes liver transplantation, which can retard progression; the results are variable, however, and advanced multiorgan involvement often prevents cure. Newer therapies to stabilize transthyretin, diminish its production, or remove it from affected organs are currently under investigation.412–416

Cardiac involvement takes the form of an infiltrative cardiomyopathy; in addition to HF via primarily diastolic limitation, small vessel disease,417 conduction system disease,418–420 and atrial and ventricular arrhythmias421 are all well recognized. Histological evaluation of hearts with cardiac amyloidosis has provided insight into the potential underlying mechanisms of cardiac arrhythmia. Amyloid fibrils infiltrate the extracellular matrix, disrupting myocardial cellular arrangement and leading to myocardial fibrosis.422,423 Perivascular amyloid infiltration and impairment of cardiomyocyte function are also well described,424,425 and the subsequent impaired vasoreactivity can result in relative myocardial ischemia and abnormal electrical conduction. This cardiotoxic infiltrative milieu is hypothesized to be the fundamental driver of conduction abnormalities and atrial and ventricular arrhythmias.

Figure 19 Amyloidosis arrhythmia treatment recommendations. AL = amyloid light-chain; AV = atrioventricular; COR = Class of Recommendation; ICD = implantable cardioverter defibrillator; LOE = Level of Evidence. Colors correspond to COR in Figure 1.

Although widespread involvement is not uncommon, with sinus node dysfunction well recognized,426–428 infranodal conduction system disease appears to be the primary conduction abnormality, as evidenced by HV interval prolongation.418,429 The disease is associated with the risk of sudden death in a number of cohorts.418 Due to the progressive amyloid deposition throughout the heart, sinus node dysfunction and conduction disease often worsen, prompting consideration of permanent pacemakers.
For patients in whom permanent pacing is necessary, lead placement should be carefully considered, given the potential for further LV depression related to RV pacing dyssynchrony. Currently, there are no studies that can provide definitive guidance on this issue.

Autonomic dysfunction with orthostatic presyncope or syncope is commonly observed in patients with systemic amyloid disease and cardiac involvement, and peripheral vasoconstrictors are frequently needed to manage symptoms.430–432 A clear conduction abnormality needs to be considered as the etiology in these patients, recognizing that most cases of SCD are likely related to infranodal conduction disease. Furthermore, significant cardiac involvement with advanced infranodal conduction abnormalities can often be masked by a normal-appearing QRS complex.418,429 By blocking AV nodal conduction, preventing compensatory physiological heart rate recovery, and directly preventing vasoconstriction, the actions of calcium channel blocking agents converge to create a malignant and potentially lethal combined effect. Evidence is limited, however, to small case series and 2 case reports.433–435

The most common tachyarrhythmia in this disorder is atrial arrhythmia. Rate control using AV nodal blocking agents can be especially challenging in the face of the relative hypotension and impairment in compensatory vasoreactivity that is commonly seen with widespread systemic and autonomic impairment. AV nodal ablation has been evaluated and appears to be a reasonable consideration in more resistant and symptomatic cases.436 Antiarrhythmic approaches are often necessary, given that maintenance of active atrial systole can be imperative for patients with restrictive LV filling; however, extensive amyloid infiltration, when present, could impair atrial systole. Extensive substrate abnormalities, presumably related to extensive atrial amyloid fibril infiltration, are common, and results from atrial fibrillation ablation are less than ideal.429,436

Frequent ectopy with NSVT is the most common ventricular dysrhythmia, yet neither burden of ectopy nor NSVT appears to predict SCD.437 Whether ICDs improve survival is not clear,438–442 and progressive HF and terminal pulseless electrical activity remain a common theme associated with cardiac death in this group. This situation may be different for patients with cardiac amyloidosis who have been successfully managed for AL-type disease421 and for patients awaiting cardiac transplantation; individualized approaches are therefore necessary. Patients with cardiac amyloidosis remain at high risk for developing intracardiac thrombus and thromboembolic stroke;443,444 anticoagulation needs to be carefully considered even in the absence of atrial arrhythmias.

COR I, LOE B-NR (References 418,426,427,445): In both symptomatic and asymptomatic individuals with cardiac amyloidosis and second-degree AV block type II, high-grade AV block, or third-degree AV block, a permanent pacemaker is recommended. AV block has been consistently linked to sudden death in patients with cardiac amyloidosis; for patients with obvious conduction system abnormalities, pacemaker implantation is recommended.

COR I, LOE C-EO: In individuals with cardiac amyloidosis who have survived a cardiac arrest, an ICD is recommended if meaningful survival greater than 1 year is expected. It is not known how many patients in the secondary prevention ICD trials had underlying cardiac amyloidosis.
Nevertheless, there is agreement that patients who have been resuscitated following a cardiac arrest are at higher risk of recurrence and can potentially be revived by defibrillation.3

COR IIb, LOE B-NR (Reference 446): In individuals with cardiac amyloidosis, the use of digoxin may be considered if used with caution due to the high risk of toxicity. Digoxin is known to bind to amyloid fibrils, and this action can putatively potentiate its effect on the myocardium. In addition, many patients with cardiac amyloidosis have renal dysfunction related to the same disease process, and serum digoxin levels can be affected by the reduced excretion. In a cohort of 107 patients with AL amyloidosis who received digoxin, the incidence of significant arrhythmias due to digoxin toxicity was 11%, and 5 patients died.446

COR IIb, LOE C-EO: In individuals with cardiac amyloidosis and symptomatic atrial arrhythmias, the use of sotalol, dofetilide, or amiodarone may be considered. Although not studied in a retrospective or prospective manner, atrial arrhythmias are common, often highly symptomatic, and poorly tolerated, mostly due to rapid ventricular rates and an irregular ventricular response that impair ventricular filling and contractility. Patients with significant ventricular diastolic disease can also present with symptomatic deterioration in the context of impaired filling without atrial systole, and antiarrhythmic agents are typically required. The class III antiarrhythmics (sotalol, dofetilide, and amiodarone) are mechanistically more suitable for this patient group, given the preponderance of atrial and ventricular myocardial fibrosis or scarring and the risk of atrial flutter and reentrant ventricular arrhythmia with class Ic agents. The use of class Ic agents can result in persistent atrial flutter in this patient group, which frequently exhibits substrate-related atrial tachycardias.

COR IIb, LOE B-NR (Reference 421): In individuals with AL-type cardiac amyloidosis with nonsustained ventricular arrhythmias, a prophylactic ICD may be considered if meaningful survival greater than 1 year is expected. Primary prevention ICD implantation remains controversial, and there are conflicting data on the prevention of SCD in cardiac amyloidosis. Potentially curative therapies have emerged to manage certain subtypes,421 and AL amyloidosis could be more favorable in this regard. Patients awaiting heart transplantation are also being considered for disease cure and should likely also be considered independently.

COR IIb, LOE C-LD (Reference 436): In individuals with cardiac amyloidosis and symptomatic atrial arrhythmias, cardiac ablation may be considered. It is important for clinicians to recognize that ablation for atrial arrhythmias has limited efficacy and high recurrence rates, even when performed in major referral centers. Patients with rapid ventricular rates and those resistant to medical therapy also appear to benefit symptomatically from combined AV nodal ablation and permanent pacemaker implantation. In a cohort of 26 patients, 13 of whom underwent catheter ablation for atrial arrhythmia (atrial fibrillation, atrial flutter, or atrial tachycardia), the 1- and 3-year recurrence-free survival rates were 70% and 60%, respectively. The remaining 13 patients underwent AV node ablation. Both ablation groups had improved symptoms, and 11 patients died during the study period.436

5.2. Brugada syndrome
Since the initial clinical description of BrS, there has been a search for structural abnormalities in patients with the Brugada phenotype, which have been challenging to prove unequivocally. Simple imaging with transthoracic echocardiography is typically normal in patients with BrS, but the technique clearly lacks the ability to image the relevant heart region (ie, the RVOT area) with meaningful resolution. Echocardiographic studies have nevertheless demonstrated delayed activation of the RV, in which the degree of delay correlated well with the degree of ST-elevation.447 Moreover, higher-resolution CT and CMR have consistently revealed structural abnormalities and enlarged ventricular volumes,448,449 which could be particularly relevant in patients with SCN5A-mediated BrS.450 The potential contribution of structural abnormalities has taken on renewed interest with the advent of epicardial mapping and ablation451 and recent preliminary histopathologic data from individuals with the Brugada phenotype and sudden death.452 Several groups have performed endomyocardial biopsies of patients with BrS, which have yielded mixed results, from findings of lymphocytic infiltrates to severe fibrofatty infiltration suggestive of ARVC.453–455 Frustaci et al examined 18 consecutive symptomatic patients with BrS with endomyocardial biopsy of both ventricles, finding evidence of abnormalities in all patients.455 Histopathology was shown to be heterogeneous in a subsequent study in 2008, in which nonspecific lymphocytic changes in the biopsies of 21 patients with BrS could not be classified into any pathognomonic pattern.454 In a recent evaluation of 6 postmortem hearts from presumed BrS-related sudden death, epicardial surface and interstitial fibrosis and reduced GJ expression were observed in the RVOT.452 Fibrosis and reduced GJ expression colocalized with abnormal potentials from previous epicardial mapping studies. These observations correlate with the previous observation that ablation of epicardial scar potentials attenuates and may even abolish the Brugada phenotype and life-threatening arrhythmias.451 Abnormal myocardial structure and conduction are therefore likely to be at least partially responsible for the development of the Brugada phenotype.456

5.3. Potassium channels: KCNQ1, KCNH2, and TRPM4
5.3.1. KCNQ1
Xiong et al identified a 60-year-old man who initially presented with episodes of palpitations and was found to have recurrent VT with LBBB morphology on a 12-lead ECG, frequent ventricular ectopy, and runs of NSVT on 24-hour Holter monitoring during the initial evaluation, with no family history of SCD, cardiac arrhythmias, or HF.457 An echocardiogram showed an enlarged LV with mildly depressed LV systolic function and an ejection fraction of 45%. He had no obstructive coronary lesions on coronary angiography and subsequently underwent radiofrequency catheter ablation of the VT and ICD implantation and was administered a beta-blocker. Follow-up echocardiograms showed persistent LV dilation and systolic dysfunction and an LVEF of 42%.
A KCNQ1 p.R397Q pathologic variant, which was predicted to be disease-related, was identified in the C-terminal domain of the KCNQ1 channel protein. The KCNQ1-R397Q variant was located in the C-terminal domain of the α-subunit of the functional KCNQ1 channel complex, which is considered an interacting domain necessary for the assembly of the channels at the membrane.458 Tail current density and peak tail current density at +70 mV were significantly reduced in cells expressing the mutant protein, and localization of the mutant KCNQ1-R397Q protein to the cell membrane was reduced compared with the KCNQ1-WT protein, all consistent with loss of function of KCNQ1. Loss-of-function variants in the KCNQ1 gene are known to cause LQTS type 1 (LQT1), whereas gain-of-function variants cause sinus bradycardia, familial atrial fibrillation, SQTS, and sudden infant death syndrome.459–463 A 12-lead ECG in the index case with the KCNQ1-R397Q pathogenic variant showed a QTc interval of 480 ms in the presence of a severe intraventricular conduction defect. The clinical phenotype, which is distinct from classic LQT1, is consistent with the loss-of-function effect of the KCNQ1-R397Q variant on trafficking of the KCNQ1 protein to the membrane and the decreased IKs tail current density. The KCNQ1-R397Q variant was also identified in a 21-year-old female victim of SCD, whose cardiac autopsy demonstrated myocyte hypertrophy, disarray, fibrosis, and fatty replacement, a phenotype reminiscent of ACM.464

In addition, Kharbanda et al presented genetic and phenotypic data from 4 family members across 2 generations with evidence of prolonged QT interval and LVNC in association with a pathogenic variant in KCNQ1.465 The association of LQTS with LVNC is uncommon, with only 1 previously reported case in association with a pathogenic KCNQ1 variant. In that case, a 5-year-old girl suffered a cardiac arrest and was found to have LVNC, a prolonged QTc, and a previously reported pathogenic KCNQ1 variant (c.1831G>T, D611Y) located in the C-terminus of KCNQ1. Several members of her family were found to carry this variant, but none had detectable ECG or echocardiographic abnormalities.466

5.3.2. KCNH2
Two cases of LQTS and LVNC have also been reported by Ogawa et al, with the two patients having different KCNH2 variants.467 SCD has occurred in these types of patients but has not been commonly reported. The optimal therapy is unclear at this time, although beta-blocker therapy has been successful in treating KCNQ1- and KCNH2-associated LQTS. ICD implantation has been used for patients with this form of LQTS who experienced an episode of sudden cardiac arrest.465

5.3.3. TRPM4
The transient receptor potential melastatin 4 (TRPM4) channel mediates a Ca2+-activated nonselective cationic current (INSCCa).468–470 In the heart, the TRPM4 channel represents the cardiac Ca2+-activated transient inward current (Iti) and plays a key role in the cardiac conduction system. At negative membrane potentials, TRPM4 channels mediate Na+ entry into the cell, leading to cellular membrane depolarization. At positive membrane potentials, TRPM4 channels can mediate cellular K+ efflux, leading to membrane repolarization. TRPM4 activity can therefore reduce or increase the driving force for Ca2+.
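The statements above about Na+ entry at negative potentials and K+ efflux at positive potentials follow from the electrochemical driving force, that is, the membrane potential minus the ion's equilibrium (Nernst) potential. The sketch below computes this with textbook-style ionic concentrations; the concentration values and membrane potentials are generic illustrations, not patient data or parameters from the cited studies.

```python
import math

R, T, F = 8.314, 310.0, 96485.0  # gas constant, body temperature (K), Faraday

def nernst(z: int, conc_out: float, conc_in: float) -> float:
    """Nernst equilibrium potential in volts for an ion of valence z."""
    return (R * T) / (z * F) * math.log(conc_out / conc_in)

# Illustrative, textbook-style concentrations in mM (not patient-derived)
e_na = nernst(+1, 145.0, 12.0)   # ~ +67 mV
e_k  = nernst(+1, 4.0, 140.0)    # ~ -95 mV
e_ca = nernst(+2, 1.8, 1e-4)     # ~ +131 mV

def driving_force(vm_mv: float, e_rev_mv: float) -> float:
    """Electrochemical driving force Vm - E_rev in mV; its sign determines
    whether current through an open channel is inward or outward."""
    return vm_mv - e_rev_mv

# Near a resting potential of -85 mV, the Na+ driving force is large and
# inward (depolarizing, as with TRPM4-mediated Na+ entry in the text);
# near a plateau potential of +20 mV, the K+ driving force is outward.
for vm in (-85.0, +20.0):
    print(f"Vm={vm:+.0f} mV: DF_Na={driving_force(vm, e_na*1e3):+.0f} mV, "
          f"DF_K={driving_force(vm, e_k*1e3):+.0f} mV, "
          f"DF_Ca={driving_force(vm, e_ca*1e3):+.0f} mV")
```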
The potential influence of TRPM4 on the driving force for Ca2+ has an important impact on the frequency of intracellular Ca2+ oscillation in T cells471 and HL-1 mouse cardiomyocytes.472 Inhibition of TRPM4 channels in these cells abolishes the Ca2+ oscillations and leads to a phasic concentration of intracellular Ca2+. TRPM4 is expressed in many cell types but is expressed most abundantly in the heart,468 where it may participate in intracellular Ca2+ sensing and affect cellular excitability by influencing the membrane potential in all cell types. The impact of TRPM4 downregulation or upregulation depends on the cell type and the presence of other ion channels, as well as exchangers and transporters.

Dominantly inherited variants in the TRPM4 gene of 4 families were shown to be associated with the cardiac bundle-branch disorder progressive familial heart block type I (PFHB1), isolated cardiac conduction disease (ICCD),473,474 AV conduction block, RBBB, bradycardia, and BrS.475,476 TRPM4 channels carrying PFHB1 and ICCD variants display a dominant gain-of-function phenotype, which is not associated with alterations in biophysical properties but with an increase in TRPM4 current density.473,474

Daumy et al477 reported on the genetic screening of 95 unrelated patients with progressive conduction system disease and identified 13 individuals with pathologic variants in the TRPM4 gene. One variant was found in a 4-generation family; systematic familial screening showed that there were 96 family members, 57 of whom could be recruited and studied. Twelve patients were diagnosed with conduction defects, 6 of whom (50%) underwent pacemaker implantation. Ten of the 12 patients presented with RBBB, 8 of whom showed left anterior hemiblock. Functional and biochemical analyses demonstrated that this variant, TRPM4-p.I376T, results in increased current density concomitant with augmented TRPM4 channel expression at the cell surface. LVNC was also identified in one of the family members. The affected patients were 34 ± 25 years of age; however, babies, children, and adolescents were affected as well. Almost no information regarding the patient with LVNC was provided, except that she had been diagnosed as a baby with LVNC, RBBB, and left anterior hemiblock and had received a pacemaker.

Using a custom gene panel consisting of 115 genes known to be associated with cardiomyopathic phenotypes and channelopathies, Forleo et al478 analyzed 38 unrelated patients: 16 with DCM, 14 with HCM, and 8 with ARVC, recruited on the basis of more severe phenotypes and a family history of cardiomyopathy and/or sudden death. In 23 of 38 patients, at least one novel potential gene-phenotype association was identified. In the case of ACM, the authors found 1 patient with asymptomatic DCM and an N915D-TRPM4 pathologic variant, with a family history of sudden death in 3 of 4 affected family members. The authors also identified an E289K-TRPM4 pathologic variant in a patient who presented with resuscitated cardiac arrest due to VF, an initial ECG with inverted T waves from V1 to V3, and subsequent features of first-degree AV block, NSVT, and paroxysmal atrial fibrillation; a 2D echocardiogram demonstrated a dilated RV, and CMR demonstrated dyskinetic areas at the free and inferior walls of the RV. The patient underwent ICD implantation.
A V1185I-TRPM4 pathologic variant was identified in a patient who also had a family history of sudden death occurring in 3 of 4 affected family members. Therapy in this patient cohort included pacemaker implantation and, in some cases, an ICD.477

Saito et al also identified a TRPM4 pathogenic variant in patients with ventricular noncompaction and cardiac conduction disease, thereby further expanding the role of TRPM4 abnormalities in ACM.479 Management of the cardiomyopathy also needs to be taken into account, using standard therapy.

5.4. Phospholamban
Phospholamban, which is encoded by the PLN gene, is a transmembrane phosphoprotein of the SR and a key regulator of calcium homeostasis.129,480 Pathogenic gene variants in PLN, mostly leading to the inhibition of calcium uptake into the SR, can cause genetic forms of cardiomyopathy, particularly those associated with early onset of rhythm disturbance.480,481 The pathogenic PLN R14del gene variant is commonly identified in patients diagnosed with ACM who have been initially diagnosed with DCM or ARVC.481,482 In the Netherlands, the PLN R14del pathologic variant is a founder variant and has been identified in 10%–15% of patients diagnosed with ACM, either arrhythmogenic DCM or ARVC.33,483 The phenotype of PLN R14del variant carriers, obtained from a limited number of index patients and family members, is characterized by a low-voltage ECG, a high frequency of malignant ventricular arrhythmias, and end-stage HF.33,481,482 Natural history insights into this inherited disorder, including onset, risk stratification for malignant ventricular arrhythmias, mortality, and prevention of SCD, require large, unselected multicenter cohorts consisting of index patients and relatives and are therefore difficult to obtain; however, a number of studies have attempted to do so.80,161

The yield from screening cardiomyopathy populations for pathologic PLN variants is generally very low, ranging from 0.08% to 0.38% in selected cohorts.481,484 The PLN R14del pathogenic variant was identified in 13% (31 of 240) of Dutch patients diagnosed with DCM and in 12% (12 of 97) of Dutch patients diagnosed with ARVC.33 The arrhythmogenic burden of the PLN R14del pathogenic variant was demonstrated by the high rate of appropriate ICD discharges and a positive family history of SCD. Additionally, PLN R14del pathogenic variant carriers more frequently underwent cardiac transplantation compared with patients with familial DCM.33 Cascade screening has identified many family members carrying the same pathogenic variants. Variable expression and age-dependent penetrance are characteristic of the PLN R14del pathogenic variant.

Sepehrkhouy et al evaluated the distribution pattern of cardiac fibrosis in hearts with desmosomal vs PLN R14del pathogenic variant cardiomyopathy and compared this pattern with fibrosis in other hereditary cardiomyopathies,485 demonstrating that cardiomyopathies associated with desmosomal or PLN R14del pathogenic variants have distinct fibrosis patterns. The posterolateral wall of the LV was particularly discriminating, and hearts with PLN R14del pathogenic variant cardiomyopathy showed significantly more fibrosis in the LV free wall than those with pathogenic desmosomal variants. Both desmosomal and PLN R14del pathogenic variants were strongly associated with life-threatening ventricular arrhythmias.
Patients with pathogenic desmosomal variants had RV fibrofatty changes and fibrosis with fatty changes in the outer part of the LV wall, predominantly in the posterolateral part, in line with earlier observations in autopsy studies of patients with ACM with unknown genotypes486 and in transgenic mouse models of desmosomal ARVC.487 The LV pathology is consistent with CMR LGE studies, in which enhancement typically involves the subepicardial and midwall layers of the inferolateral region of the LV in ACM.488–490 Hearts from patients with a PLN R14del pathogenic variant also had a pattern of RV fibrofatty replacement and LV fibrosis with fatty changes mostly in the posterolateral wall, regardless of clinical presentation.491,492 However, hearts with the PLN R14del pathogenic variant had significantly more fibrosis in the LV and less fat in the RV compared with hearts with desmosomal variants. These observations were also confirmed in a cohort of 153 Dutch patients with ACM and in a combined United States and Dutch cohort of 577 patients, in which more LV involvement was observed in patients with PLN pathogenic variants than in those with desmosomal pathogenic variants using electrocardiographic and imaging criteria (echocardiography, CMR, RV/LV cine-angiography).141,493

The distribution of fibrosis patterns suggests that different variants could make the cardiomyocyte vulnerable to different stressors, with potential damaging mechanisms that are not evenly distributed over the various regions of the myocardium. The authors speculated that the pattern of predominantly RV and LV (posterolateral) epicardial fibrosis or fibrofatty replacement is induced by increased sensitivity to wall stress on the heart. This is supported by the demonstration that exercise induces a 125% increase in end-systolic wall stress in the RV, compared with only 14% in the LV,494 suggesting that the RV is more vulnerable to wall stress. Given the arrhythmogenic profile of the PLN R14del pathogenic variant, primary prevention by implantation of an ICD could be beneficial for variant carriers.33,481,482

5.5. Left ventricular noncompaction
See Evidence Table: Left Ventricular Noncompaction. Recommendation flow diagrams are shown in Figure 20 and Figure 21.

LVNC is a genetic disorder characterized by excessive and unusual trabeculations within the LV, which are thought to arise from developmental arrest and failure of the heart to fully form the compact myocardium during the final phase of cardiac development.288,495 Genetic inheritance occurs in at least 30%–50% of patients, and LVNC is thought to occur at a rate of approximately 1 case per 7000 live births.496,497 LVNC is characterized by a spongy morphological appearance of the myocardium occurring primarily in the LV, with abnormal trabeculations typically being most evident in the apical, mid-lateral, and inferior portions of the LV.498–500 The RV can also be affected, causing RV noncompaction or biventricular noncompaction.499,501 The LV myocardium comprises 2 distinct layers, a compact and a noncompact layer, along with prominent trabeculae and deep intertrabecular recesses.495,497 Apical thinning of the compact layer is also typical.
These features may be associated with normal ventricular chamber dimensions, wall thickness, and function; LV dilation or hypertrophy; systolic and/or diastolic dysfunction; atrial enlargement; various forms of congenital heart disease; or arrhythmias.

Figure 20 Diagnosis and risk stratification of left ventricular noncompaction (LVNC) (A) and family and genetic evaluation of LVNC (B). CMR = cardiac magnetic resonance imaging; COR = Class of Recommendation; LOE = Level of Evidence; NC/C = maximum noncompaction to compaction ratio. Colors correspond to COR in Figure 1.

Noncompaction cardiomyopathy is therefore phenotypically heterogeneous and can be subclassified into 9 different forms, including the most benign form (in which LV size, thickness, and systolic and diastolic function are normal, with no associated early-onset arrhythmias), an RV form, a biventricular form, a DCM form, an HCM form, an RCM form, a mixed form (combination of HCM and DCM or DCM and RCM), a congenital heart disease form, and an arrhythmogenic form.11,499 The more severe phenotypes are most typically observed in children, especially those younger than 1 year of age. High-resolution cardiac imaging, such as with CMR, has improved the ability to find the most benign form. Focal LVNC was observed in at least 1 LV myocardial segment in 43% of participants without heart disease or hypertension in a United States population-based CMR study, and in 2 segments in 6% of this cohort.502 These findings were replicated in a CMR study of a population cohort from the United Kingdom, in which 14.8% of individuals met at least 1 criterion for LVNC, and 4.4% met the most specific criterion.503 The myocardium in LVNC can change unexpectedly from one form to another ("undulating phenotype").504

Although many patients are asymptomatic, LV or RV failure commonly occurs and causes HF symptoms, which can be exercise-induced or persistent at rest. Patients undergoing long-term treatment sometimes present acutely with decompensated HF. Other life-threatening risks include ventricular arrhythmias and AV block, which can present clinically as syncope or sudden death.499 Typically, rhythm abnormalities occur early in the presentation in some patients, most commonly being observed at the time of the initial diagnosis, consistent with an ACM.
LVNC occurs in newborns, young children, adolescents, and adults, with the worst reported outcomes observed in infants and in those in the third and fourth decades of life. In some families, a consistent LVNC phenotype is observed in affected relatives; quite commonly, however, individuals with features of LVNC are found in families in which other affected relatives have been diagnosed with typical HCM, DCM, RCM, or ACM. Variants in approximately 15 genes have been implicated as causing noncompaction cardiomyopathy and include genes encoding desmosomal (desmoplakin and plakophilin 2), cytoskeletal, sarcomeric (most common), and ion channel proteins. Disrupted mitochondrial function and metabolic abnormalities also have a causal role.353,354,505–508

Figure 21 Left ventricular noncompaction (LVNC) treatment recommendations. Anticoagulation refers to vitamin K antagonists and direct oral anticoagulants. Children are often administered aspirin. COR = Class of Recommendation; ICD = implantable cardioverter defibrillator; LOE = Level of Evidence; VT = ventricular tachycardia. Colors correspond to COR in Figure 1.

Treatment focuses on improving cardiac efficiency and reducing mechanical stress in those patients with systolic dysfunction. Arrhythmia therapy and ICD implantation to prevent sudden death are the mainstays of treatment when deemed necessary and appropriate.509 LVNC can be associated with a malignant course in children or adults, and risk stratification is lacking.499,505,510 Patients with LVNC associated with arrhythmias, with or without systolic or diastolic dysfunction, should avoid endurance exercise and competitive sports.

5.5.1. Diagnostic methods and criteria

5.5.1.1. Noninvasive imaging

Echocardiography has been the diagnostic imaging technique of first choice, with CMR more recently becoming the diagnostic gold standard. The typical diagnostic criteria for echocardiography and CMR rely mainly on the ratio of the noncompacted layer to the compact layer thickness, evidence of intertrabecular recesses filled from the LV cavity by color Doppler echocardiography, and segmental localization of hypertrabeculation diagnostic of noncompaction. The ability of CMR to identify the presence and extent of LGE as a surrogate marker of myocardial fibrosis is also employed to determine the extent of LV scarring (which has been significantly related to ECG abnormalities and tachyarrhythmias) and LV dysfunction.
In patients with LVNC evaluated by CMR, the degree of LV trabeculation had no prognostic effect over and above LV dilation, LV systolic dysfunction, and the presence of LGE.511

5.5.1.2. Electrocardiography

Normal electrocardiographic results are rare in LVNC, with 80%–90% of ECGs being abnormal. Infants and young children commonly have excessive voltage, predominantly in the anterolateral leads.512 These individuals, particularly those with early childhood presentation of LVNC, may have associated pre-excitation as well. Arrhythmias (including supraventricular tachycardia, VT, and atrial fibrillation/flutter) are common and dangerous accompaniments in LVNC. Conduction system abnormalities also occur. In the systematic review by Bhatia et al,513 most arrhythmias in patients with LVNC were VT and atrial fibrillation, with the prevalence of VT approaching 40% and SCD accounting for more than 55% of LVNC-related deaths. Brescia et al510 reported on the evaluation of 242 children with isolated LVNC and noted that 31 (12.8%) died, 150 (62%) presented with or developed cardiac dysfunction, and 13 (5.4%) underwent transplantation. The presence of cardiac dysfunction was strongly associated with mortality (HR: 11; P < .001). ECG abnormalities were observed in 87% of the patients, with ventricular hypertrophy and repolarization abnormalities occurring most commonly. Repolarization abnormalities were associated with increased mortality (HR: 2.1; P = .02). Eighty (33.1%) children had an arrhythmia, and those with arrhythmias had increased mortality (HR: 2.8; P = .002), with 42 (17.4%) having VT and 5 presenting with resuscitated SCD. In total, there were 15 cases of SCD in the cohort (6.2%). Nearly all patients who died suddenly (14 of 15) had abnormal cardiac dimensions or cardiac dysfunction and early-onset arrhythmias. The authors concluded that the mortality rate in children with LVNC is strongly associated with arrhythmia development, with preceding cardiac dysfunction or ventricular arrhythmias associated with increased mortality. Muser et al studied 9 patients (mean age of 42 ± 15 years) diagnosed with LVNC and ventricular arrhythmias, including 3 with VT and 6 with frequent PVCs despite treatment with a mean of 2 ± 1 antiarrhythmic drugs.514 The authors conducted EPSs and identified ablation sites using a combination of entrainment, activation, late or fractionated potential ablation, and pace mapping. Eight (89%) patients showed LV systolic dysfunction, with a mean ejection fraction of 40% ± 13%. Patients who presented with VT had evidence of an abnormal electroanatomic substrate involving the mid to apical segments of the LV, which matched the noncompacted myocardial segments identified by CMR or echocardiography prior to the procedure. In patients presenting with frequent PVCs, the site of origin was identified at the papillary muscles (50%) and/or the basal septal regions (67%). After a median follow-up of 4 years (range 1–11) and a mean of 1.8 ± 1.1 procedures, ventricular arrhythmias recurred in only 1 patient (11%), and significant improvement in LV function occurred in 50% of cases.

5.5.2. Treatment
According to the ACC/AHA guidelines on device-based therapy for cardiac rhythm abnormalities,4 there are sufficient observational data to indicate that ICD placement to reduce the risk of sudden death can be a reasonable clinical strategy for primary prevention in patients with LVNC. ICD implantation should follow the general guidelines for primary and secondary prevention.4 Patients with LVNC who have a moderate reduction in LV systolic function are more likely to have a primary prevention indication for ICD placement. Gleva et al evaluated 661 adults with LVNC (mean age 46.4 ± 14.9 years; 55% male, 45% female), two-thirds of whom had HF (30% class III/IV), with a mean LVEF of 33.4% ± 15.5%. Atrial fibrillation/flutter occurred in 21% of patients, 67% had nonsustained VT, and 30% had VT or prior VT arrest (5%).515 In 78% of patients, an ICD was placed for primary prevention, while 20% required an ICD for secondary prevention.

COR | LOE | Recommendations | References
I | B-NR | If the proband has a disease-causing gene variant, it is recommended that first-degree relatives of individuals with LVNC undergo clinical screening for the disease along with genetic counseling and genetic testing. | 349,505,514
IIa | B-NR | In individuals with the clinical diagnosis of pathologic LVNC, genetic counseling and genetic testing are reasonable for diagnosis and for gene-specific targeted cascade family screening. | 349,505

LVNC is an autosomal dominant inherited disorder, so a disease-causing variant has a 50% chance of being passed on from gene carriers to offspring, and first-degree relatives have a 50% chance of carrying it. Genetic testing for individuals with LVNC could identify the causative gene and then allow for gene-specific targeted cascade family screening as a prevention measure that identifies at-risk family members. Variants in approximately 15 genes have been implicated as causative of noncompaction cardiomyopathy and include genes encoding desmosomal (desmoplakin and plakophilin 2), cytoskeletal, sarcomeric (most common), and ion channel proteins. In addition, disrupted mitochondrial function and metabolic abnormalities have a causal role.353,354,505–508 In a study of 194 relatives of 50 unrelated LVNC probands,505 64% showed familial cardiomyopathy that also included HCM and DCM. Due to the substantial overlap of LVNC with other forms of cardiomyopathy, genetic testing panels should encompass genes in which variants are associated with these other forms of cardiomyopathy. Among 17 asymptomatic relatives, 8 were nonpenetrant variant carriers. In a study of 128 pediatric patients with LVNC,349 75 of whom underwent genetic testing, the yield was 9%. Furthermore, patients with isolated LVNC were less likely to have a positive genetic test. Given the genetic heterogeneity and variable presentation and penetrance of LVNC, family members need a comprehensive approach that includes clinical screening and genetic counseling and testing.

COR | LOE | Recommendations | References
I | B-NR | ICD implantation is recommended in individuals with LVNC and evidence of ventricular tachyarrhythmias associated with syncope or resuscitated sudden death if meaningful survival greater than 1 year is expected. | 509
IIa | B-NR | ICD implantation is reasonable in individuals with LVNC and evidence of nonsustained VT associated with a reduced ejection fraction. | 509,510

Patients with LVNC with evidence of VT associated with syncope or resuscitated sudden death are at high risk.
In a cohort of 44 prospectively analyzed patients with LVNC509 who were implanted with an ICD for either secondary (n = 12, for VF or sustained VT) or primary (n = 32, for HF with severe LV dysfunction) prevention, 8 patients (4 implanted with an ICD for primary prevention and 4 implanted for secondary prevention) received appropriate ICD therapies over a median follow-up time of 6.1 months. Inappropriate ICD therapies occurred in 6 patients implanted with an ICD for primary prevention and in 3 patients implanted for secondary prevention. Complications with ICD implantation can occur regardless of the underlying etiology but are infrequent (estimated at less than 2% in a registry of patients that included those with LVNC).515 Among primary prevention patients, higher risk for adverse arrhythmic outcomes is associated with LV dysfunction. In a cohort of 242 pediatric patients with isolated LVNC,510 15 experienced SCD, 14 of whom had abnormal cardiac dimensions or ventricular function, whereas those children with normal function and dimensions were at low risk for sudden death. Of 42 patients with VT, 5 had presented with resuscitated SCD; the mortality risk was also increased for the 80 children with an arrhythmia (HR: 2.8; P = .002).

COR | LOE | Recommendations | References
I | B-NR | Anticoagulation is recommended in individuals with LVNC with atrial fibrillation and in those with previous embolic events. | 516
IIb | B-NR | Anticoagulation may be reasonable in individuals with LVNC with evidence of ventricular dysfunction. | 516

LVNC carries an increased risk of thromboembolism when associated with atrial fibrillation or prior embolism. Thrombus formation may occur in the intertrabecular recesses of the LV, leading to the possibility of ejection to the coronary arteries, causing ischemia, or to the brain, resulting in a stroke. In a cohort of 144 patients with LVNC,516 stroke or peripheral embolism occurred in 22 patients, with 14 identified as due to a cardioembolic cause. A cardioembolic cause of stroke was related to either the presence of atrial fibrillation or systolic dysfunction. This further strengthens the indications for anticoagulation based upon well-established studies of stroke risk in patients with atrial fibrillation.517 In pediatric patients, aspirin is often used.
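Taken together, the graded recommendations above and the flow of Figure 21 reduce to a small amount of boolean logic. The sketch below makes that structure explicit; it is an illustrative summary only, not a clinical decision tool, and the field and function names are hypothetical, not from the source document.

```python
# Illustrative sketch of the LVNC treatment recommendation flow (Figure 21).
# Not a clinical decision tool; field and function names are hypothetical.
from dataclasses import dataclass

@dataclass
class LVNCFindings:
    vt_with_syncope_or_resuscitated_scd: bool
    nsvt_with_reduced_ef: bool
    afib_or_prior_embolic_event: bool
    ventricular_dysfunction: bool
    meaningful_survival_over_1_year: bool = True

def figure_21_recommendations(f: LVNCFindings) -> list[str]:
    recs = []
    # ICD branch: the secondary prevention indication (COR I) takes
    # precedence over primary prevention for NSVT with reduced EF (COR IIa).
    if f.vt_with_syncope_or_resuscitated_scd and f.meaningful_survival_over_1_year:
        recs.append("ICD implantation recommended (COR I, LOE B-NR)")
    elif f.nsvt_with_reduced_ef:
        recs.append("ICD implantation reasonable (COR IIa, LOE B-NR)")
    # Anticoagulation branch: AF or prior embolism (COR I); otherwise
    # ventricular dysfunction alone carries a weaker recommendation (COR IIb).
    if f.afib_or_prior_embolic_event:
        recs.append("Anticoagulation recommended (COR I, LOE B-NR)")
    elif f.ventricular_dysfunction:
        recs.append("Anticoagulation may be reasonable (COR IIb, LOE B-NR)")
    return recs

# Example: AF plus NSVT with reduced EF yields a COR IIa ICD indication
# and a COR I anticoagulation indication.
print(figure_21_recommendations(LVNCFindings(False, True, True, True)))
```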
COR | LOE | Recommendations | References
IIb | B-NR | In individuals with suspected LVNC, the diagnostic criteria by echocardiography or CMR, measured as the maximal ratio of noncompaction to compaction (NC/C), may be reasonable for establishing a diagnosis. | 375,511,515,518,519
IIb | B-NR | In individuals with suspected LVNC and ventricular arrhythmias, CMR or other advanced cardiac imaging may be reasonable for establishing a diagnosis and for risk stratification. | 511,519,520

The maximum noncompaction to compaction ratio (NC/C) in the LV has been employed as a diagnostic criterion with mixed results, and its relationship with outcomes is uncertain. In an analysis of 700 patients referred for CMR,519 imaging criteria for LVNC were analyzed based on the ratio of noncompacted to compacted myocardium or trabeculation mass. The authors found a wide range for the apparent prevalence of LVNC according to the imaging criteria used and, furthermore, that the clinical outcomes of death, ischemic stroke, VT, VF, or HF hospitalization were not related to the presence or absence of LVNC by any of the criteria. In a study of 199 patients with LV systolic dysfunction compared with healthy controls,375 echocardiographic criteria for LVNC, including the ratio of noncompacted to compacted myocardium, were observed in 23.6% of the patients, with 5 control patients (4 of whom were black) meeting the echocardiographic criteria for LVNC despite having no history of cardiovascular disease. These findings call into question the specificity of echocardiographic criteria for diagnosing LVNC and suggest that trabeculation is the result of increased circulatory volume. This is further supported by a study of pregnant patients, which found that trabeculations are commonly observed during pregnancy, a time of increased LV loading conditions, and that trabeculations regress postpartum.518 For patients with suspected LVNC and ventricular arrhythmias, CMR or other advanced cardiac imaging can help establish a diagnosis and assist in risk stratification due to better visualization of areas of hypertrabeculation. In a study by Sidhu et al,520 8 patients with LVNC diagnosed by other methods (clinical, echocardiogram, and conventional magnetic resonance imaging [MRI]) underwent cardiac CT using a 17-segment model. Other patient groups studied included those with nonischemic DCM, severe aortic stenosis, severe aortic regurgitation, HCM, and LV hypertrophy due to hypertension, as well as a control group of 20 patients without cardiovascular disease. The authors found that a ratio of noncompacted to compacted myocardium >2.3 distinguished LVNC, with a sensitivity of 88% and a specificity of 97%. In a study of 113 patients511 with LVNC determined by echocardiography who underwent CMR, all demonstrated a ratio of noncompacted to compacted myocardium of at least 2.3 in diastole. Additional CMR criteria were analyzed, including LV dilation, LGE, and percentage of noncompacted myocardial mass (the ratio of noncompacted to compacted mass exceeding 3:1 or 2:1, based upon the segment analyzed). Patients were followed for cardiac events for a mean period of 48 ± 24 months. LV dilation, systolic dysfunction, and fibrosis were found to be predictors of cardiac events, but the indices related to noncompacted myocardium were not. The use of advanced cardiac imaging in patients suspected of LVNC can help establish the diagnosis and possibly provide risk stratification. The data published in the Multi-Ethnic Study of Atherosclerosis suggest that, using CMR, a ratio of trabeculated to compact myocardium of >2.3 is common in a large population-based cohort (43% had a ratio >2.3 in at least 1 region). Only 6% of participants in the study had a maximum ratio >2.3 in more than 2 regions in this older population (mean age of 68 years).502,521

See Table 5 for diagnostic criteria for LVNC.

Table 5 Diagnostic criteria for left ventricular noncompaction (LVNC)

References | Modality | N | LVNC diagnostic criteria
522 | Echo | 8 | 2 layers, excessively prominent ventricular trabeculations, progressively increased total myocardial wall thickness from mitral valve and toward the apex, CM/(NCM + CM) ≤0.5 at end-diastole (short-axis parasternal and/or apical views)
523 | Echo | 34 | 2 layers, intertrabecular recesses by CFD, no co-existing structural abnormality, NC/C layer ≥2
373 | Echo | 62 | >3 trabeculations protruding from LV wall apically to papillary muscle. End-diastolic NC/C layer ≥2
497 | MRI | 7 | 2 layers. End-diastolic NC/C >2.3
524 | MRI | 16 | Total LV trabeculated mass without papillary muscles. End-diastolic NC layer volume >20%

C = compaction; CM = compacted myocardium; echo = echocardiogram; LV = left ventricular; MRI = magnetic resonance imaging; NC/C = maximum noncompaction to compaction ratio; NCM = noncompacted myocardium.
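Because the diagnostic thresholds in Table 5 are simple ratios, a short arithmetic sketch can make them concrete. The following Python fragment is purely illustrative: function names are hypothetical, the thresholds are those of Table 5, and the final predictive-value calculation combines the sensitivity and specificity reported by Sidhu et al with the approximately 1-per-7000 prevalence quoted earlier, used here only as a crude prior to show why a positive ratio alone is weak evidence in an unselected population.

```python
# A minimal arithmetic sketch (not a diagnostic tool) of the end-diastolic
# NC/C thickness ratio used by the imaging criteria in Table 5. Function and
# variable names are illustrative assumptions, not from the source document.

def nc_c_ratio(noncompacted_mm: float, compacted_mm: float) -> float:
    """End-diastolic noncompacted-to-compacted (NC/C) thickness ratio."""
    return noncompacted_mm / compacted_mm

def meets_lvnc_criterion(ratio: float, modality: str) -> bool:
    # Table 5 thresholds: echocardiographic criteria use NC/C >= 2,
    # CMR/MRI criteria use NC/C > 2.3, both measured at end-diastole.
    return ratio >= 2.0 if modality == "echo" else ratio > 2.3

# Example: 18 mm of trabeculated over 6 mm of compacted myocardium.
r = nc_c_ratio(18.0, 6.0)              # 3.0
print(meets_lvnc_criterion(r, "mri"))  # True

# With the CT sensitivity (0.88) and specificity (0.97) of Sidhu et al and
# an assumed prevalence of 1 in 7000 (the birth-prevalence figure quoted in
# the text, used here as a rough screening prior), Bayes' rule gives a very
# low positive predictive value for an isolated positive ratio.
sens, spec, prev = 0.88, 0.97, 1 / 7000
ppv = sens * prev / (sens * prev + (1 - spec) * (1 - prev))
print(round(ppv, 4))  # ~0.0042
```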
Section 6 Future directions and research recommendations

In the future, a variety of new approaches to the understanding of mechanisms responsible for the development and progression of ACMs will be a key focus. With this knowledge, novel treatment options based on targeting members of final common pathways at the gene and protein level can potentially be designed and tested. Gene editing could also provide novel options, as could regenerative medicine. To achieve these goals, research must focus on the array of disorders categorized under the umbrella of ACMs. Potential topics for study include the following:
1. Mechanisms of desmosome/ID disruption and cell–cell pulling apart.
2. Mechanisms by which exercise results in early-onset and increased severity of ACM.
3. Mechanisms responsible for generating arrhythmias.
4. Nondesmosomal causes of ACM.
5. Utility of genetic testing in ACM prognosis.
6. Differences between right- and left-sided disease outcomes.
7. Medical therapy approaches.
8. Arrhythmia management approaches.
9. Gene editing and regenerative medicine; scientific methods and studies in animals and humans.

Appendix. Supplementary Data

Supplementary data (Appendix 3) and interview video associated with this article can be found in the online version.

References
1. Indik JH, Patton KK, Beardsall M, et al. HRS clinical document development methodology manual and policies: executive summary. Heart Rhythm 2017;14:e495–e500.
2. Halperin JL, Levine GN, Al-Khatib SM, et al. Further evolution of the ACC/AHA clinical practice guideline recommendation classification system: a report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines. Circulation 2016;133:1426–1428.
3. Al-Khatib SM, Stevenson WG, Ackerman MJ, et al. 2017 AHA/ACC/HRS guideline for management of patients with ventricular arrhythmias and the prevention of sudden cardiac death. Heart Rhythm 2018;15:e73–e189.
4. Epstein AE, DiMarco JP, Ellenbogen KA, et al. ACC/AHA/HRS 2008 guidelines for device-based therapy of cardiac rhythm abnormalities. Heart Rhythm 2008;5:e1–e62.
5. Ackerman MJ, Priori SG, Willems S, et al. HRS/EHRA expert consensus statement on the state of genetic testing for the channelopathies and cardiomyopathies. Heart Rhythm 2011;8:1308–1339.
6. Priori SG, Wilde AA, Horie M, et al. HRS/EHRA/APHRS expert consensus statement on the diagnosis and management of patients with inherited primary arrhythmia syndromes. Heart Rhythm 2013;10:1932–1963.
7. Yancy CW, Jessup M, Bozkurt B, et al. 2016 ACC/AHA/HFSA focused update on new pharmacological therapy for heart failure: an update of the 2013 ACCF/AHA guideline for the management of heart failure: a report of the American College of Cardiology/American Heart Association Task Force on Clinical Practice Guidelines and the Heart Failure Society of America. J Am Coll Cardiol 2016;68:1476–1488.
8. Yancy CW, Jessup M, Bozkurt B, et al. 2013 ACCF/AHA guideline for the management of heart failure: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines. J Am Coll Cardiol 2013;62:e147–e239.
9. Ponikowski P, Voors AA, Anker SD, et al. 2016 ESC guidelines for the diagnosis and treatment of acute and chronic heart failure: the Task Force for the diagnosis and treatment of acute and chronic heart failure of the European Society of Cardiology (ESC). Developed with the special contribution of the Heart Failure Association (HFA) of the ESC. Eur Heart J 2016;37:2129–2200.
10. Marcus FI, McKenna WJ, Sherrill D, et al. Diagnosis of arrhythmogenic right ventricular cardiomyopathy/dysplasia: proposed modification of the task force criteria. Circulation 2010;121:1533–1541.
11. Hershberger RE, Givertz MM, Ho CY, et al. Genetic evaluation of cardiomyopathy—a Heart Failure Society of America Practice Guideline. J Card Fail 2018;24:281–302.
12. Corrado D, Wichter T, Link MS, et al. Treatment of arrhythmogenic right ventricular cardiomyopathy/dysplasia: an international task force consensus statement. Circulation 2015;132:441–453.
13. McKenna WJ, Stewart JT, Nihoyannopoulos P, McGinty F, Davies MJ. Hypertrophic cardiomyopathy without hypertrophy: two families with myocardial disarray in the absence of increased myocardial mass. Br Heart J 1990;63:287–290.
14. Watkins H, Ashrafian H, Redwood C. Inherited cardiomyopathies. N Engl J Med 2011;364:1643–1656.
15. Mogensen J, Kubo T, Duque M, et al. Idiopathic restrictive cardiomyopathy is part of the clinical expression of cardiac troponin I mutations. J Clin Invest 2003;111:209–216.
16. Dalla Volta S, Battaglia G, Zerbini E. "Auricularization" of right ventricular pressure curve. Am Heart J 1961;61:25–33.
17. Marcus FI, Fontaine GH, Guiraudon G, et al. Right ventricular dysplasia: a report of 24 adult cases. Circulation 1982;65:384–398.
18. Rowland E, McKenna WJ, Sugrue D, Barclay R, Foale RA, Krikler DM. Ventricular tachycardia of left bundle branch block configuration in patients with isolated right ventricular dilatation. Clinical and electrophysiological features. Br Heart J 1984;51:15–24.
19. McKenna WJ, Thiene G, Nava A, et al. Diagnosis of arrhythmogenic right ventricular dysplasia/cardiomyopathy. Task Force of the Working Group Myocardial and Pericardial Disease of the European Society of Cardiology and of the Scientific Council on Cardiomyopathies of the International Society and Federation of Cardiology. Br Heart J 1994;71:215–218.
20. Coonar AS, Protonotarios N, Tsatsopoulou A, et al. Gene for arrhythmogenic right ventricular cardiomyopathy with diffuse nonepidermolytic palmoplantar keratoderma and woolly hair (Naxos disease) maps to 17q21. Circulation 1998;97:2049–2058.
21. Protonotarios A, Anastasakis A, Panagiotakos DB, et al. Arrhythmic risk assessment in genotyped families with arrhythmogenic right ventricular cardiomyopathy. Europace 2016;18:610–616.
22. McKoy G, Protonotarios N, Crosby A, et al. Identification of a deletion in plakoglobin in arrhythmogenic right ventricular cardiomyopathy with palmoplantar keratoderma and woolly hair (Naxos disease). Lancet 2000;355:2119–2124.
23. Norgett EE, Hatsell SJ, Carvajal-Huerta L, et al. Recessive mutation in desmoplakin disrupts desmoplakin-intermediate filament interactions and causes dilated cardiomyopathy, woolly hair and keratoderma. Hum Mol Genet 2000;9:2761–2766.
24. Gerull B, Heuser A, Wichter T, et al. Mutations in the desmosomal protein plakophilin-2 are common in arrhythmogenic right ventricular cardiomyopathy. Nat Genet 2004;36:1162–1164.
25. Syrris P, Ward D, Asimaki A, et al. Desmoglein-2 mutations in arrhythmogenic right ventricular cardiomyopathy: a genotype-phenotype characterization of familial disease. Eur Heart J 2007;28:581–588.
26. Syrris P, Ward D, Evans A, et al. Arrhythmogenic right ventricular dysplasia/cardiomyopathy associated with mutations in the desmosomal gene desmocollin-2. Am J Hum Genet 2006;79:978–984.
27. Vatta M, Marcus F, Towbin JA. Arrhythmogenic right ventricular cardiomyopathy: a 'final common pathway' that defines clinical phenotype. Eur Heart J 2007;28:529–530.
28. Towbin JA, Lorts A. Arrhythmias and dilated cardiomyopathy common pathogenetic pathways? J Am Coll Cardiol 2011;57:2169–2171.
29. Towbin JA. Desmosomal gene variants in patients with "possible ARVC." Heart Rhythm 2011;8:719–720.
30. Sen-Chowdhry S, Prasad SK, Syrris P, et al. Cardiovascular magnetic resonance in arrhythmogenic right ventricular cardiomyopathy revisited: comparison with task force criteria and genotype. J Am Coll Cardiol 2006;48:2132–2140.
31. Norman M, Simpson M, Mogensen J, et al. Novel mutation in desmoplakin causes arrhythmogenic left ventricular cardiomyopathy. Circulation 2005;112:636–642.
32. Kumar S, Baldinger SH, Gandjbakhch E, et al. Long-term arrhythmic and nonarrhythmic outcomes of lamin A/C mutation carriers. J Am Coll Cardiol 2016;68:2299–2307.
33. van der Zwaag PA, van Rijsingen IA, Asimaki A, et al. Phospholamban R14del mutation in patients diagnosed with dilated cardiomyopathy or arrhythmogenic right ventricular cardiomyopathy: evidence supporting the concept of arrhythmogenic cardiomyopathy. Eur J Heart Fail 2012;14:1199–1207.
34. Ortiz-Genga MF, Cuenca S, Dal Ferro M, et al. Truncating FLNC mutations are associated with high-risk dilated and arrhythmogenic cardiomyopathies. J Am Coll Cardiol 2016;68:2440–2451.
35. Bowles NE, Bowles KR, Towbin JA. The "final common pathway" hypothesis and inherited cardiovascular disease. The role of cytoskeletal proteins in dilated cardiomyopathy. Herz 2000;25:168–175.
36. Towbin JA. The role of cytoskeletal proteins in cardiomyopathies. Curr Opin Cell Biol 1998;10:131–139.
37. Towbin JA, Bowles KR, Bowles NE. Etiologies of cardiomyopathy and heart failure. Nat Med 1999;5:266–267.
38. Hoshijima M. Mechanical stress-strain sensors embedded in cardiac cytoskeleton: Z disk, titin, and associated structures. Am J Physiol Heart Circ Physiol 2006;290:H1313–H1325.
39. Corrado D, Link MS, Calkins H. Arrhythmogenic right ventricular cardiomyopathy. N Engl J Med 2017;376:1489–1490.
40. Niroomand F, Carbucicchio C, Tondo C, et al. Electrophysiological characteristics and outcome in patients with idiopathic right ventricular arrhythmia compared with arrhythmogenic right ventricular dysplasia. Heart 2002;87:41–47.
41. Towbin JA. Arrhythmogenic right ventricular cardiomyopathy: a paradigm of overlapping disorders. Ann Noninvasive Electrocardiol 2008;13:325–326.
42. Wilmot I, Morales DL, Price JF, et al. Effectiveness of mechanical circulatory support in children with acute fulminant and persistent myocarditis. J Card Fail 2011;17:487–494.
43. Pedersen CT, Kay GN, Kalman J, et al. EHRA/HRS/APHRS expert consensus on ventricular arrhythmias. Heart Rhythm 2014;11:e166–e196.
44. Steinmetz M, Krause U, Lauerer P, et al. Diagnosing ARVC in pediatric patients applying the revised task force criteria: importance of imaging, 12-Lead ECG, and genetics. Pediatr Cardiol 2018;39:1156–1164.
45. Deshpande SR, Herman HK, Quigley PC, et al. Arrhythmogenic right ventricular cardiomyopathy/dysplasia (ARVC/D): review of 16 pediatric cases and a proposal of modified pediatric criteria. Pediatr Cardiol 2016;37:646–655.
46. Chatterjee D, Fatah M, Akdis D, et al. An autoantibody identifies arrhythmogenic right ventricular cardiomyopathy and participates in its pathogenesis. Eur Heart J 2018;39:3932–3944.
47. Calkins H. A new diagnostic test for arrhythmogenic right ventricular cardiomyopathy: is this too good to be true? Eur Heart J 2018;39:3945–3946.
48. Cox MG, van der Smagt JJ, Noorman M, et al. Arrhythmogenic right ventricular dysplasia/cardiomyopathy diagnostic task force criteria: impact of new task force criteria. Circ Arrhythm Electrophysiol 2010;3:126–133.
49. Jaoude SA, Leclercq JF, Coumel P. Progressive ECG changes in arrhythmogenic right ventricular disease. Evidence for an evolving disease. Eur Heart J 1996;17:1717–1722.
50. Nava A, Bauce B, Basso C, et al. Clinical profile and long-term follow-up of 37 families with arrhythmogenic right ventricular cardiomyopathy. J Am Coll Cardiol 2000;36:2226–2233.
51. Te Riele AS, James CA, Bhonsale A, et al. Malignant arrhythmogenic right ventricular dysplasia/cardiomyopathy with a normal 12-lead electrocardiogram: a rare but underrecognized clinical entity. Heart Rhythm 2013;10:1484–1491.
52. Mast TP, James CA, Calkins H, et al. Evaluation of structural progression in arrhythmogenic right ventricular dysplasia/cardiomyopathy. JAMA Cardiol 2017;2:293–302.
53. Piccini JP, Nasir K, Bomma C, et al. Electrocardiographic findings over time in arrhythmogenic right ventricular dysplasia/cardiomyopathy. Am J Cardiol 2005;96:122–126.
54. Quarta G, Ward D, Tome Esteban MT, et al. Dynamic electrocardiographic changes in patients with arrhythmogenic right ventricular cardiomyopathy. Heart 2010;96:516–522.
55. Cox MG, Nelen MR, Wilde AA, et al. Activation delay and VT parameters in arrhythmogenic right ventricular dysplasia/cardiomyopathy: toward improvement of diagnostic ECG criteria. J Cardiovasc Electrophysiol 2008;19:775–781.
56. Marcus FI, Zareba W. The electrocardiogram in right ventricular cardiomyopathy/dysplasia. How can the electrocardiogram assist in understanding the pathologic and functional changes of the heart in this disease? J Electrocardiol 2009;42:136.e1–5.
57. Peters S, Trummel M. Diagnosis of arrhythmogenic right ventricular dysplasia-cardiomyopathy: value of standard ECG revisited. Ann Noninvasive Electrocardiol 2003;8:238–245.
58. Lohrmann GM, Peters F, Srivathsan K, Essop MR, Mookadam F. Electrocardiographic abnormalities in disease-free black South Africans and correlations with echocardiographic indexes and early repolarization. Am J Cardiol 2016;118:765–770.
59. Malhotra A, Dhutia H, Gati S, et al. Anterior T-wave inversion in young white athletes and nonathletes: prevalence and significance. J Am Coll Cardiol 2017;69:1–9.
60. Marcus FI. Prevalence of T-wave inversion beyond V1 in young normal individuals and usefulness for the diagnosis of arrhythmogenic right ventricular cardiomyopathy/dysplasia. Am J Cardiol 2005;95:1070–1071.
61. Jain R, Dalal D, Daly A, et al. Electrocardiographic features of arrhythmogenic right ventricular dysplasia. Circulation 2009;120:477–487.
62. Platonov PG, Calkins H, Hauer RN, et al. High interobserver variability in the assessment of epsilon waves: implications for diagnosis of arrhythmogenic right ventricular cardiomyopathy/dysplasia. Heart Rhythm 2016;13:208–216.
63. Tanawuttiwat T, Te Riele AS, Philips B, et al. Electroanatomic correlates of depolarization abnormalities in arrhythmogenic right ventricular dysplasia/cardiomyopathy. J Cardiovasc Electrophysiol 2016;27:443–452.
64. Marcus FI. Epsilon waves aid in the prognosis and risk stratification of patients with ARVC/D. J Cardiovasc Electrophysiol 2015;26:1211–1212.
65. Protonotarios A, Anastasakis A, Tsatsopoulou A, et al. Clinical significance of epsilon waves in arrhythmogenic cardiomyopathy. J Cardiovasc Electrophysiol 2015;26:1204–1210.
66. Cox MG, van der Smagt JJ, Wilde AA, et al. New ECG criteria in arrhythmogenic right ventricular dysplasia/cardiomyopathy. Circ Arrhythm Electrophysiol 2009;2:524–530.
67. Nasir K, Bomma C, Tandri H, et al. Electrocardiographic features of arrhythmogenic right ventricular dysplasia/cardiomyopathy according to disease severity: a need to broaden diagnostic criteria. Circulation 2004;110:1527–1534.
68. Cox MG, van der Zwaag PA, van der Werf C, et al. Arrhythmogenic right ventricular dysplasia/cardiomyopathy: pathogenic desmosome mutations in index-patients predict outcome of family screening: Dutch arrhythmogenic right ventricular dysplasia/cardiomyopathy genotype-phenotype follow-up study. Circulation 2011;123:2690–2700.
69. Nunes de Alencar Neto J, Baranchuk A, Bayes-Genis A, Bayes de Luna A. Arrhythmogenic right ventricular dysplasia/cardiomyopathy: an electrocardiogram-based review. Europace 2018;20:f3–f12.
70. Nery PB, Beanlands RS, Nair GM, et al. Atrioventricular block as the initial manifestation of cardiac sarcoidosis in middle-aged adults. J Cardiovasc Electrophysiol 2014;25:875–881.
71. Andrade JP, Marin Neto JA, Paola AA, et al. I Latin American guidelines for the diagnosis and treatment of Chagas' heart disease: executive summary. Arq Bras Cardiol 2011;96:434–442.
72. Bastiaenen R, Pantazis A, Gonna H, et al. The ventricular ectopic QRS interval (VEQSI): diagnosis of arrhythmogenic right ventricular cardiomyopathy in patients with incomplete disease expression. Heart Rhythm 2016;13:1504–1512.
73. Camm CF, Tichnell C, James CA, et al. Premature ventricular contraction variability in arrhythmogenic right ventricular dysplasia/cardiomyopathy. J Cardiovasc Electrophysiol 2015;26:53–57.
74. Kamath GS, Zareba W, Delaney J, et al. Value of the signal-averaged electrocardiogram in arrhythmogenic right ventricular cardiomyopathy/dysplasia. Heart Rhythm 2011;8:256–262.
75. Bauce B, Rampazzo A, Basso C, et al. Clinical phenotype and diagnosis of arrhythmogenic right ventricular cardiomyopathy in pediatric patients carrying desmosomal gene mutations. Heart Rhythm 2011;8:1686–1695.
76. Manyari DE, Duff HJ, Kostuk WJ, et al. Usefulness of noninvasive studies for diagnosis of right ventricular dysplasia. Am J Cardiol 1986;57:1147–1153.
77. Reant P, Hauer AD, Castelletti S, et al. Epicardial myocardial strain abnormalities may identify the earliest stages of arrhythmogenic cardiomyopathy. Int J Cardiovasc Imaging 2016;32:593–601.
78. Haugaa KH, Basso C, Badano LP, et al. Comprehensive multi-modality imaging approach in arrhythmogenic cardiomyopathy-an expert consensus document of the European Association of Cardiovascular Imaging. Eur Heart J Cardiovasc Imaging 2017;18:237–253.
79. Kaplan SR, Gard JJ, Protonotarios N, et al. Remodeling of myocyte gap junctions in arrhythmogenic right ventricular cardiomyopathy due to a deletion in plakoglobin (Naxos disease). Heart Rhythm 2004;1:3–11.
80. Sen-Chowdhry S, Syrris P, Ward D, Asimaki A, Sevdalis E, McKenna WJ. Clinical and genetic characterization of families with arrhythmogenic right ventricular dysplasia/cardiomyopathy provides novel insights into patterns of disease expression. Circulation 2007;115:1710–1720.
81. Blusztein DI, Zentner D, Thompson T, et al. Arrhythmogenic right ventricular cardiomyopathy: a review of living and deceased probands. Heart Lung Circ 2019;28:1034–1041.
82. Corrado D, Calkins H, Link MS, et al. Prophylactic implantable defibrillator in patients with arrhythmogenic right ventricular cardiomyopathy/dysplasia and no prior ventricular fibrillation or sustained ventricular tachycardia. Circulation 2010;122:1144–1152.
83. Corrado D, Leoni L, Link MS, et al. Implantable cardioverter-defibrillator therapy for prevention of sudden death in patients with arrhythmogenic right ventricular cardiomyopathy/dysplasia. Circulation 2003;108:3084–3091.
84. Denis A, Sacher F, Derval N, et al. Diagnostic value of isoproterenol testing in arrhythmogenic right ventricular cardiomyopathy. Circ Arrhythm Electrophysiol 2014;7:590–597.
85. Angelini A, Basso C, Nava A, Thiene G. Endomyocardial biopsy in arrhythmogenic right ventricular cardiomyopathy. Am Heart J 1996;132:203–206.
86. Basso C, Ronco F, Marcus F, et al. Quantitative assessment of endomyocardial biopsy in arrhythmogenic right ventricular cardiomyopathy/dysplasia: an in vitro validation of diagnostic criteria. Eur Heart J 2008;29:2760–2771.
87. Avella A, d'Amati G, Pappalardo A, et al. Diagnostic value of endomyocardial biopsy guided by electroanatomic voltage mapping in arrhythmogenic right ventricular cardiomyopathy/dysplasia. J Cardiovasc Electrophysiol 2008;19:1127–1134.
88. Paul M, Stypmann J, Gerss J, et al. Safety of endomyocardial biopsy in patients with arrhythmogenic right ventricular cardiomyopathy: a study analyzing 161 diagnostic procedures. JACC Cardiovasc Interv 2011;4:1142–1148.
89. Ermakov S, Ursell PC, Johnson CJ, et al. Plakoglobin immunolocalization as a diagnostic test for arrhythmogenic right ventricular cardiomyopathy. Pacing Clin Electrophysiol 2014;37:1708–1716.
90. Munkholm J, Christensen AH, Svendsen JH, Andersen CB. Usefulness of immunostaining for plakoglobin as a diagnostic marker of arrhythmogenic right ventricular cardiomyopathy. Am J Cardiol 2012;109:272–275.
91. Asimaki A, Tandri H, Huang H, et al. A new diagnostic test for arrhythmogenic right ventricular cardiomyopathy. N Engl J Med 2009;360:1075–1084.
92. Xu T, Yang Z, Vatta M, et al. Compound and digenic heterozygosity contributes to arrhythmogenic right ventricular cardiomyopathy. J Am Coll Cardiol 2010;55:587–597.
93. Sikkema-Raddatz B, Johansson LF, de Boer EN, et al. Targeted next-generation sequencing can replace Sanger sequencing in clinical diagnostics. Hum Mutat 2013;34:1035–1042.
94. Kapplinger JD, Landstrom AP, Salisbury BA, et al. Distinguishing arrhythmogenic right ventricular cardiomyopathy/dysplasia-associated mutations from background genetic noise. J Am Coll Cardiol 2011;57:2317–2327.
95. Richards S, Aziz N, Bale S, et al. Standards and guidelines for the interpretation of sequence variants: a joint consensus recommendation of the American College of Medical Genetics and Genomics and the Association for Molecular Pathology. Genet Med 2015;17:405–424.
96. Plon SE, Eccles DM, Easton D, et al. Sequence variant classification and reporting: recommendations for improving the interpretation of cancer susceptibility genetic test results. Hum Mutat 2008;29:1282–1291.
97. Van Driest SL, Wells QS, Stallings S, et al. Association of arrhythmia-related genetic variants with phenotypes documented in electronic medical records. JAMA 2016;315:47–57.
98. Amendola LM, Dorschner MO, Robertson PD, et al. Actionable exomic incidental findings in 6503 participants: challenges of variant classification. Genome Res 2015;25:305–315.
99. Amendola LM, Jarvik GP, Leo MC, et al. Performance of ACMG-AMP variant-interpretation guidelines among nine laboratories in the Clinical Sequencing Exploratory Research Consortium. Am J Hum Genet 2016;99:247.
100. Walsh R, Thomson KL, Ware JS, et al. Reassessment of Mendelian gene pathogenicity using 7,855 cardiomyopathy cases and 60,706 reference samples. Genet Med 2017;19:192–203.
101. Manrai AK, Funke BH, Rehm HL, et al. Genetic misdiagnoses and the potential for health disparities. N Engl J Med 2016;375:655–665.
102. De Bortoli M, Beffagna G, Bauce B, et al. The p.A897KfsX4 frameshift variation in desmocollin-2 is not a causative mutation in arrhythmogenic right ventricular cardiomyopathy. Eur J Hum Genet 2010;18:776–782.
103. Posch MG, Posch MJ, Perrot A, Dietz R, Ozcelik C. Variations in DSG2: V56M, V158G and V920G are not pathogenic for arrhythmogenic right ventricular dysplasia/cardiomyopathy. Nat Clin Pract Cardiovasc Med 2008;5:E1.
104. Christensen AH, Benn M, Tybjaerg-Hansen A, Haunso S, Svendsen JH. Missense variants in plakophilin-2 in arrhythmogenic right ventricular cardiomyopathy patients—disease-causing or innocent bystanders? Cardiology 2010;115:148–154.
105. Gandjbakhch E, Charron P, Fressart V, et al. Plakophilin 2A is the dominant isoform in human heart tissue: consequences for the genetic screening of arrhythmogenic right ventricular cardiomyopathy. Heart 2011;97:844–849.
106. Andreasen C, Nielsen JB, Refsgaard L, et al. New population-based exome data are questioning the pathogenicity of previously cardiomyopathy-associated genetic variants. Eur J Hum Genet 2013;21:918–928.
107. Mogensen J, van Tintelen JP, Fokstuen S, et al. The current role of next-generation DNA sequencing in routine care of patients with hereditary cardiovascular conditions: a viewpoint paper of the European Society of Cardiology working group on myocardial and pericardial diseases and members of the European Society of Human Genetics. Eur Heart J 2015;36:1367–1370.
108. Rehm HL, Berg JS, Brooks LD, et al. ClinGen—the Clinical Genome Resource. N Engl J Med 2015;372:2235–2242.
109. Kelly MA, Caleshu C, Morales A, et al. Adaptation and validation of the ACMG/AMP variant classification framework for MYH7-associated inherited cardiomyopathies: recommendations by ClinGen's Inherited Cardiomyopathy Expert Panel. Genet Med 2018;20:351–359.
110. Milko LV, Funke BH, Hershberger RE, et al. Development of Clinical Domain Working Groups for the Clinical Genome Resource (ClinGen): lessons learned and plans for the future. Genet Med 2019;21:987–993.
111. Ingles J, Goldstein J, Thaxton C, et al. Evaluating the clinical validity of hypertrophic cardiomyopathy genes. Circ Genom Precis Med 2019;12:e002460.
112. Bienengraeber M, Olson TM, Selivanov VA, et al. ABCC9 mutations identified in human dilated cardiomyopathy disrupt catalytic KATP channel gating. Nat Genet 2004;36:382–387.
113. Rampazzo A, Nava A, Malacrida S, et al. Mutation in human desmoplakin domain binding to plakoglobin causes a dominant form of arrhythmogenic right ventricular cardiomyopathy. Am J Hum Genet 2002;71:1200–1206.
114. Taylor M, Graw S, Sinagra G, et al. Genetic variation in titin in arrhythmogenic right ventricular cardiomyopathy-overlap syndromes. Circulation 2011;124:876–885.
115. van Hengel J, Calore M, Bauce B, et al. Mutations in the area composita protein alphaT-catenin are associated with arrhythmogenic right ventricular cardiomyopathy. Eur Heart J 2013;34:201–210.
116. Murray B, Hoorntje ET, Te Riele A, et al. Identification of sarcomeric variants in probands with a clinical diagnosis of arrhythmogenic right ventricular cardiomyopathy (ARVC). J Cardiovasc Electrophysiol 2018;29:1004–1009.
117. Medeiros-Domingo A, Saguner AM, Magyar I, et al. Arrhythmogenic right ventricular cardiomyopathy: implications of next-generation sequencing in appropriate diagnosis. Europace 2017;19:1063–1069.
118. Mayosi BM, Fish M, Shaboodien G, et al. Identification of cadherin 2 (CDH2) mutations in arrhythmogenic right ventricular cardiomyopathy. Circ Cardiovasc Genet 2017;10:e001605.
119. Turkowski KL, Tester DJ, Bos JM, Haugaa KH, Ackerman MJ. Whole exome sequencing with genomic triangulation implicates CDH2-encoded N-cadherin as a novel pathogenic substrate for arrhythmogenic cardiomyopathy. Congenit Heart Dis 2017;12:226–235.
120. De Bortoli M, Postma AV, Poloni G, et al. Whole-exome sequencing identifies pathogenic variants in TJP1 gene associated with arrhythmogenic cardiomyopathy. Circ Genom Precis Med 2018;11:e002123.
121. Norton N, Li D, Rieder MJ, et al. Genome-wide studies of copy number variation and exome sequencing identify rare variants in BAG3 as a cause of dilated cardiomyopathy. Am J Hum Genet 2011;88:273–282.
122. Hedberg C, Melberg A, Kuhl A, Jenne D, Oldfors A. Autosomal dominant myofibrillar myopathy with arrhythmogenic right ventricular cardiomyopathy 7 is caused by a DES mutation. Eur J Hum Genet 2012;20:984–985.
123. Awad MM, Dalal D, Cho E, et al. DSG2 mutations contribute to arrhythmogenic right ventricular dysplasia/cardiomyopathy. Am J Hum Genet 2006;79:136–142.
124. Yang Z, Bowles NE, Scherer SE, et al. Desmosomal dysfunction due to mutations in desmoplakin causes arrhythmogenic right ventricular dysplasia/cardiomyopathy. Circ Res 2006;99:646–655.
125. Asimaki A, Syrris P, Wichter T, Matthias P, Saffitz JE, McKenna WJ. A novel dominant mutation in plakoglobin causes arrhythmogenic right ventricular cardiomyopathy. Am J Hum Genet 2007;81:964–973.
126. Vatta M, Mohapatra B, Jimenez S, et al. Mutations in Cypher/ZASP in patients with dilated cardiomyopathy and left ventricular non-compaction. J Am Coll Cardiol 2003;42:2014–2027.
127. Quarta G, Syrris P, Ashworth M, et al. Mutations in the Lamin A/C gene mimic arrhythmogenic right ventricular cardiomyopathy. Eur Heart J 2012;33:1128–1136.
128. Pashmforoush M, Lu JT, Chen H, et al. Nkx2-5 pathways and congenital heart disease; loss of ventricular myocyte lineage specification leads to progressive cardiomyopathy and complete heart block. Cell 2004;117:373–386.
129. Schmitt JP, Kamisago M, Asahi M, et al. Dilated cardiomyopathy and heart failure caused by a mutation in phospholamban. Science 2003;299:1410–1413.
130. Brauch KM, Karst ML, Herron KJ, et al. Mutations in ribonucleic acid binding protein gene cause familial dilated cardiomyopathy. J Am Coll Cardiol 2009;54:930–941.
131. McNair WP, Ku L, Taylor MR, et al. SCN5A mutation associated with dilated cardiomyopathy, conduction disorder, and arrhythmia. Circulation 2004;110:2163–2167.
132. Merner ND, Hodgkinson KA, Haywood AF, et al. Arrhythmogenic right ventricular cardiomyopathy type 5 is a fully penetrant, lethal arrhythmic disorder caused by a missense mutation in the TMEM43 gene. Am J Hum Genet 2008;82:809–821.
133. Pilichou K, Lazzarini E, Rigato I, et al. Large genomic rearrangements of desmosomal genes in Italian arrhythmogenic cardiomyopathy patients. Circ Arrhythm Electrophysiol 2017;10:e005324.
134. Roberts JD, Herkert JC, Rutberg J, et al. Detection of genomic deletions of PKP2 in arrhythmogenic right ventricular cardiomyopathy. Clin Genet 2013;83:452–456.
135. Judge DP, Johnson NM. Genetic evaluation of familial cardiomyopathy. J Cardiovasc Transl Res 2008;1:144–154.
136. Baudhuin LM, Leduc C, Train LJ, et al. Technical advances for the clinical genomic evaluation of sudden cardiac death: verification of next-generation sequencing panels for hereditary cardiovascular conditions using formalin-fixed paraffin-embedded tissues and dried blood spots. Circ Cardiovasc Genet 2017;10:e001884.
137. Carturan E, Tester DJ, Brost BC, Basso C, Thiene G, Ackerman MJ. Postmortem genetic testing for conventional autopsy-negative sudden unexplained death: an evaluation of different DNA extraction protocols and the feasibility of mutational analysis from archival paraffin-embedded heart tissue. Am J Clin Pathol 2008;129:391–397.
138. Bagnall RD, Weintraub RG, Ingles J, et al. A prospective study of sudden cardiac death among children and young adults. N Engl J Med 2016;374:2441–2452.
139. Judge DP. Use of genetics in the clinical evaluation of cardiomyopathy. JAMA 2009;302:2471–2476.
140. Lopez-Ayala JM, Gomez-Milanes I, Sanchez Munoz JJ, et al. Desmoplakin truncations and arrhythmogenic left ventricular cardiomyopathy: characterizing a phenotype. Europace 2014;16:1838–1846.
141. Bhonsale A, Groeneweg JA, James CA, et al. Impact of genotype on clinical course in arrhythmogenic right ventricular dysplasia/cardiomyopathy-associated mutation carriers. Eur Heart J 2015;36:847–855.
142. Rigato I, Bauce B, Rampazzo A, et al. Compound and digenic heterozygosity predicts lifetime arrhythmic outcome and sudden cardiac death in desmosomal gene-related arrhythmogenic right ventricular cardiomyopathy. Circ Cardiovasc Genet 2013;6:533–542.
143. Fressart V, Duthoit G, Donal E, et al. Desmosomal gene analysis in arrhythmogenic right ventricular dysplasia/cardiomyopathy: spectrum of mutations and clinical impact in practice. Europace 2010;12:861–868.
144. Bao J, Wang J, Yao Y, et al. Correlation of ventricular arrhythmias with genotype in arrhythmogenic right ventricular cardiomyopathy. Circ Cardiovasc Genet 2013;6:552–556.
145. Groeneweg JA, Bhonsale A, James CA, et al. Clinical presentation, long-term follow-up, and outcomes of 1001 arrhythmogenic right ventricular dysplasia/cardiomyopathy patients and family members. Circ Cardiovasc Genet 2015;8:437–446.
146. Te Riele AS, James CA, Groeneweg JA, et al. Approach to family screening in arrhythmogenic right ventricular dysplasia/cardiomyopathy. Eur Heart J 2016;37:755–763.
147. Quarta G, Muir A, Pantazis A, et al. Familial evaluation in arrhythmogenic right ventricular cardiomyopathy: impact of genetics and revised task force criteria. Circulation 2011;123:2701–2709.
148. Perrin MJ, Angaran P, Laksman Z, et al. Exercise testing in asymptomatic gene carriers exposes a latent electrical substrate of arrhythmogenic right ventricular cardiomyopathy. J Am Coll Cardiol 2013;62:1772–1779.
149. Pasotti M, Klersy C, Pilotto A, et al. Long-term outcome and risk stratification in dilated cardiolaminopathies. J Am Coll Cardiol 2008;52:1250–1260.
150. van Rijsingen IA, Nannenberg EA, Arbustini E, et al. Gender-specific differences in major cardiac events and mortality in lamin A/C mutation carriers. Eur J Heart Fail 2013;15:376–384.
151. Forleo C, Carmosino M, Resta N, et al. Clinical and functional characterization of a novel mutation in lamin a/c gene in a multigenerational family with arrhythmogenic cardiac laminopathy. PLoS One 2015;10:e0121723.
152. Kato K, Takahashi N, Fujii Y, et al. LMNA cardiomyopathy detected in Japanese arrhythmogenic right ventricular cardiomyopathy cohort. J Cardiol 2016;68:346–351.
153. Liang JJ, Grogan M, Ackerman MJ. LMNA-mediated arrhythmogenic right ventricular cardiomyopathy and Charcot-Marie-Tooth type 2B1: a patient-discovered unifying diagnosis. J Cardiovasc Electrophysiol 2016;27:868–871.
154. Valtuille L, Paterson I, Kim DH, Mullen J, Sergi C, Oudit GY. A case of lamin A/C mutation cardiomyopathy with overlap features of ARVC: a critical role of genetic testing. Int J Cardiol 2013;168:4325–4327.
155. Nishiuchi S, Makiyama T, Aiba T, et al. Gene-based risk stratification for cardiac disorders in LMNA mutation carriers. Circ Cardiovasc Genet 2017;10:e001603.
156. van Rijsingen IA, Arbustini E, Elliott PM, et al. Risk factors for malignant ventricular arrhythmias in lamin a/c mutation carriers: a European cohort study. J Am Coll Cardiol 2012;59:493–500.
157. Meune C, Van Berlo JH, Anselme F, Bonne G, Pinto YM, Duboc D. Primary prevention of sudden death in patients with lamin A/C gene mutations. N Engl J Med 2006;354:209–210.
158. Lopez-Ayala JM, Ortiz-Genga M, Gomez-Milanes I, et al. A mutation in the Z-line Cypher/ZASP protein is associated with arrhythmogenic right ventricular cardiomyopathy. Clin Genet 2015;88:172–176.
159. Milting H, Klauke B, Christensen AH, et al. The TMEM43 Newfoundland mutation p.S358L causing ARVC-5 was imported from Europe and increases the stiffness of the cell nucleus. Eur Heart J 2015;36:872–881.
160. Hodgkinson KA, Connors SP, Merner N, et al. The natural history of a genetic subtype of arrhythmogenic right ventricular cardiomyopathy caused by a p.S358L mutation in TMEM43. Clin Genet 2013;83:321–331.
161. van Rijsingen IA, van der Zwaag PA, Groeneweg JA, et al. Outcome in phospholamban R14del carriers: results of a large multicentre cohort study. Circ Cardiovasc Genet 2014;7:455–465.
162. Kalia SS, Adelman K, Bale SJ, et al. Recommendations for reporting of secondary findings in clinical exome and genome sequencing, 2016 update (ACMG SF v2.0): a policy statement of the American College of Medical Genetics and Genomics. Genet Med 2017;19:249–255.
163. Haggerty CM, James CA, Calkins H, et al. Electronic health record phenotype in subjects with genetic variants associated with arrhythmogenic right ventricular cardiomyopathy: a study of 30,716 subjects with exome sequencing. Genet Med 2017;19:1245–1252.
164. Ashley EA, Hershberger RE, Caleshu C, et al. Genetics and cardiovascular disease: a policy statement from the American Heart Association. Circulation 2012;126:142–157.
165. Morales A, Cowan J, Dagua J, Hershberger RE. Family history: an essential tool for cardiovascular genetic medicine. Congest Heart Fail 2008;14:37–45.
166. Ingles J, Yeates L, Semsarian C. The emerging role of the cardiac genetic counselor. Heart Rhythm 2011;8:1958–1962.
167. Waddell-Smith KE, Donoghue T, Oates S, et al. Inpatient detection of cardiac-inherited disease: the impact of improving family history taking. Open Heart 2016;3:e000329.
168. Dunn KE, Caleshu C, Cirino AL, Ho CY, Ashley EA. A clinical approach to inherited hypertrophy: the use of family history in diagnosis, risk assessment, and management. Circ Cardiovasc Genet 2013;6:118–131.
169. van Tintelen JP, Van Gelder IC, Asimaki A, et al. Severe cardiac phenotype with right ventricular predominance in a large cohort of patients with a single missense mutation in the DES gene. Heart Rhythm 2009;6:1574–1583.
170. Hasselberg NE, Haland TF, Saberniak J, et al. Lamin A/C cardiomyopathy: young onset, high penetrance, and frequent need for heart transplantation. Eur Heart J 2018;39:853–860.
171. Te Riele A, James CA, Sawant AC, et al. Arrhythmogenic right ventricular dysplasia/cardiomyopathy in the pediatric population: clinical characterization and comparison with adult-onset disease. JACC Clin Electrophysiol 2015;1:551–560.
172. Dalal D, James C, Devanagondi R, et al. Penetrance of mutations in plakophilin-2 among families with arrhythmogenic right ventricular dysplasia/cardiomyopathy. J Am Coll Cardiol 2006;48:1416–1424.
173. Hamid MS, Norman M, Quraishi A, et al. Prospective evaluation of relatives for familial arrhythmogenic right ventricular cardiomyopathy/dysplasia reveals a need to broaden diagnostic criteria. J Am Coll Cardiol 2002;40:1445–1450.
174. te Riele AS, James CA, Rastegar N, et al. Yield of serial evaluation in at-risk family members of patients with ARVD/C. J Am Coll Cardiol 2014;64:293–301.
175. Mast TP, Teske AJ, Walmsley J, et al. Right ventricular imaging and computer simulation for electromechanical substrate characterization in arrhythmogenic right ventricular cardiomyopathy. J Am Coll Cardiol 2016;68:2185–2197.
176. Ackerman JP, Bartos DC, Kapplinger JD, Tester DJ, Delisle BP, Ackerman MJ. The promise and peril of precision medicine: phenotyping still matters most. Mayo Clin Proc 2016;91:1606–1616.
177. Cadrin-Tourigny J, Bosman LP, Nozza A, et al. A new prediction model for ventricular arrhythmias in arrhythmogenic right ventricular cardiomyopathy. Eur Heart J 2019;40:1850–1858.
178. Hulot JS, Jouven X, Empana JP, Frank R, Fontaine G. Natural history and risk stratification of arrhythmogenic right ventricular dysplasia/cardiomyopathy. Circulation 2004;110:1879–1884.
179. Link MS, Laidlaw D, Polonsky B, et al. Ventricular arrhythmias in the North American multidisciplinary study of ARVC: predictors, characteristics, and treatment. J Am Coll Cardiol 2014;64:119–125.
180. Orgeron GM, James CA, Te Riele A, et al. Implantable cardioverter-defibrillator therapy in arrhythmogenic right ventricular dysplasia/cardiomyopathy: predictors of appropriate therapy, outcomes, and complications. J Am Heart Assoc 2017;6:e006242.
181. Hodgkinson KA, Parfrey PS, Bassett AS, et al. The impact of implantable cardioverter-defibrillator therapy on survival in autosomal-dominant arrhythmogenic right ventricular cardiomyopathy (ARVD5). J Am Coll Cardiol 2005;45:400–408.
182. Bardy GH, Lee KL, Mark DB, et al. Amiodarone or an implantable cardioverter-defibrillator for congestive heart failure. N Engl J Med 2005;352:225–237.
183. Mazzanti A, Ng K, Faragli A, et al. Arrhythmogenic right ventricular cardiomyopathy: clinical course and predictors of arrhythmic risk. J Am Coll Cardiol 2016;68:2540–2550.
184. Pinamonti B, Dragos AM, Pyxaras SA, et al. Prognostic predictors in arrhythmogenic right ventricular cardiomyopathy: results from a 10-year registry. Eur Heart J 2011;32:1105–1113.
185. Bansch D, Antz M, Boczor S, et al. Primary prevention of sudden cardiac death in idiopathic dilated cardiomyopathy: the Cardiomyopathy Trial (CAT). Circulation 2002;105:1453–1458.
186. Desai AS, Fang JC, Maisel WH, Baughman KL. Implantable defibrillators for the prevention of mortality in patients with nonischemic cardiomyopathy: a meta-analysis of randomized controlled trials. JAMA 2004;292:2874–2879.
187. Kadish A, Dyer A, Daubert JP, et al. Prophylactic defibrillator implantation in patients with nonischemic dilated cardiomyopathy. N Engl J Med 2004;350:2151–2158.
188. Strickberger SA, Hummel JD, Bartlett TG, et al. Amiodarone versus implantable cardioverter-defibrillator: randomized trial in patients with nonischemic dilated cardiomyopathy and asymptomatic nonsustained ventricular tachycardia—AMIOVIRT. J Am Coll Cardiol 2003;41:1707–1712.
189. Anselme F, Moubarak G, Savoure A, et al. Implantable cardioverter-defibrillators in lamin A/C mutation carriers with cardiac conduction disorders. Heart Rhythm 2013;10:1492–1498.
190. Kimura Y, Noda T, Otsuka Y, et al. Potentially lethal ventricular arrhythmias and heart failure in arrhythmogenic right ventricular cardiomyopathy: what are the differences between men and women? JACC Clin Electrophysiol 2016;2:546–555.
191. Bristow MR, Saxon LA, Boehmer J, et al. Cardiac-resynchronization therapy with or without an implantable defibrillator in advanced chronic heart failure. N Engl J Med 2004;350:2140–2150.
192. Zannad F, Gattis Stough W, Rossignol P, et al. Mineralocorticoid receptor antagonists for heart failure with reduced ejection fraction: integrating evidence into clinical practice. Eur Heart J 2012;33:2782–2795.
193. McMurray JJ, Packer M, Desai AS, et al. Angiotensin-neprilysin inhibition versus enalapril in heart failure. N Engl J Med 2014;371:993–1004.
194. Swedberg K, Komajda M, Bohm M, et al. Ivabradine and outcomes in chronic heart failure (SHIFT): a randomised placebo-controlled study. Lancet 2010;376:875–885.
195. Abdul-Rahim AH, Shen L, Rush CJ, Jhund PS, Lees KR, McMurray JJV. Effect of digoxin in patients with heart failure and mid-range (borderline) left ventricular ejection fraction. Eur J Heart Fail 2018;20:1139–1145.
196. Tracy CM, Epstein AE, Darbar D, et al. 2012 ACCF/AHA/HRS focused update of the 2008 guidelines for device-based therapy of cardiac rhythm abnormalities: a report of the American College of Cardiology Foundation/American Heart Association Task Force on Practice Guidelines and the Heart Rhythm Society [corrected]. Circulation 2012;126:1784–1800.
197. Fabritz L, Hoogendijk MG, Scicluna BP, et al. Load-reducing therapy prevents development of arrhythmogenic right ventricular cardiomyopathy in plakoglobin-deficient mice. J Am Coll Cardiol 2011;57:740–750.
198. Wlodarska EK, Wozniak O, Konka M, Rydlewska-Sadowska W, Biederman A, Hoffman P. Thromboembolic complications in patients with arrhythmogenic right ventricular dysplasia/cardiomyopathy. Europace 2006;8:596–600.
199. Homma S, Thompson JL, Pullicino PM, et al. Warfarin and aspirin in patients with heart failure and sinus rhythm. N Engl J Med 2012;366:1859–1869.
200. Lip GY, Ponikowski P, Andreotti F, et al. Thrombo-embolism and antithrombotic therapy for heart failure in sinus rhythm. A joint consensus document from the ESC Heart Failure Association and the ESC Working Group on Thrombosis. Eur J Heart Fail 2012;14:681–695.
A joint consensus document from the ESC Heart Failure Association and the ESC Working Group on Throm-bosis. Eur J Heart Fail 2012;14:681–695. 201. January CT, Wann LS, Alpert JS, et al. 2014 AHA/ACC/HRS guideline for the management of patients with atrial fibrillation: a report of the American College of Cardiology/American Heart Association Task Force on Practice Guidelines and the Heart Rhythm Society. J Am Coll Cardiol 2014; 64:e1–76. 202. Kirchhof P, Benussi S, Kotecha D, et al. 2016 ESC Guidelines for the manage-ment of atrial fibrillation developed in collaboration with EACTS. Eur Heart J 2016;37:2893–2962. 203. Ruwald MH, Abu-Zeitone A, Jons C, et al. Impact of carvedilol and metoprolol on inappropriate implantable cardioverter-defibrillator therapy: the MADIT-CRT trial (Multicenter Automatic Defibrillator Implantation With Cardiac Re-synchronization Therapy). J Am Coll Cardiol 2013;62:1343–1350. 204. Moss AJ, Schuger C, Beck CA, et al. Reduction in inappropriate therapy and mortality through ICD programming. N Engl J Med 2012;367:2275–2283. 205. Gasparini M, Proclemer A, Klersy C, et al. Effect of long-detection interval vs standard-detection interval for implantable cardioverter-defibrillators on antita-chycardia pacing and shock delivery: the ADVANCE III randomized clinical trial. JAMA 2013;309:1903–1911. 206. Saeed M, Hanna I, Robotis D, et al. Programming implantable cardioverter-defibrillators in patients with primary prevention indication to prolong time to first shock: results from the PROVIDE study. J Cardiovasc Electrophysiol 2014;25:52–59. 207. Marcus GM, Glidden DV, Polonsky B, et al. Efficacy of antiarrhythmic drugs in arrhythmogenic right ventricular cardiomyopathy: a report from the North American ARVC Registry. J Am Coll Cardiol 2009;54:609–615. 208. Connolly SJ, Dorian P, Roberts RS, et al. Comparison of beta-blockers, amiodar-one plus beta-blockers, or sotalol for prevention of shocks from implantable car-dioverter defibrillators: the OPTIC Study: a randomized trial. JAMA 2006; 295:165–171. 209. Ermakov S, Gerstenfeld EP, Svetlichnaya Y, Scheinman MM. Use of flecainide in combination antiarrhythmic therapy in patients with arrhythmogenic right ventricular cardiomyopathy. Heart Rhythm 2017;14:564–569. 210. Kannankeril PJ, Moore JP, Cerrone M, et al. Efficacy of flecainide in the treat-ment of catecholaminergic polymorphic ventricular tachycardia: a randomized clinical trial. JAMA Cardiol 2017;2:759–766. 211. Salvage SC, Chandrasekharan KH, Jeevaratnam K, et al. Multiple targets for fle-cainide action: implications for cardiac arrhythmogenesis. Br J Pharmacol 2018; 175:1260–1278. 212. Tokuda M, Tedrow UB, Kojodjojo P, et al. Catheter ablation of ventricular tachycardia in nonischemic heart disease. Circ Arrhythm Electrophysiol 2012; 5:992–1000. 213. Sapp JL, Wells GA, Parkash R, et al. Ventricular tachycardia ablation versus escalation of antiarrhythmic drugs. N Engl J Med 2016;375:111–121. 214. Tung R, Vaseghi M, Frankel DS, et al. Freedom from recurrent ventricular tachycardia after catheter ablation is associated with improved survival in pa-tients with structural heart disease: an International VT Ablation Center Collab-orative Group study. Heart Rhythm 2015;12:1997–2007. 215. Tzou WS, Tung R, Frankel DS, et al. Outcomes after repeat ablation of ventric-ular tachycardia in structural heart disease: an analysis from the International VT Ablation Center Collaborative Group. Heart Rhythm 2017;14:991–997. 216. Santangeli P, Zado ES, Supple GE, et al. 
Long-term outcome with catheter abla-tion of ventricular tachycardia in patients with arrhythmogenic right ventricular cardiomyopathy. Circ Arrhythm Electrophysiol 2015;8:1413–1421. 217. Mallidi J, Nadkarni GN, Berger RD, Calkins H, Nazarian S. Meta-analysis of catheter ablation as an adjunct to medical therapy for treatment of ventricular tachycardia in patients with structural heart disease. Heart Rhythm 2011; 8:503–510. 218. Philips B, te Riele AS, Sawant A, et al. Outcomes and ventricular tachycardia recurrence characteristics after epicardial ablation of ventricular tachycardia in arrhythmogenic right ventricular dysplasia/cardiomyopathy. Heart Rhythm 2015;12:716–725. 219. Bai R, Di Biase L, Shivkumar K, et al. Ablation of ventricular arrhythmias in arrhythmogenic right ventricular dysplasia/cardiomyopathy: arrhythmia-free survival after endo-epicardial substrate based mapping and ablation. Circ Ar-rhythm Electrophysiol 2011;4:478–485. 220. Berruezo A, Acosta J, Fernandez-Armenta J, et al. Safety, long-term outcomes and predictors of recurrence after first-line combined endoepicardial ventricular tachycardia substrate ablation in arrhythmogenic cardiomyopathy. Impact of arrhythmic substrate distribution pattern. A prospective multicentre study. Euro-pace 2017;19:607–616. 221. Dalal D, Jain R, Tandri H, et al. Long-term efficacy of catheter ablation of ven-tricular tachycardia in patients with arrhythmogenic right ventricular dysplasia/ cardiomyopathy. J Am Coll Cardiol 2007;50:432–440. 222. Garcia FC, Bazan V, Zado ES, Ren JF, Marchlinski FE. Epicardial substrate and outcome with epicardial ablation of ventricular tachycardia in arrhythmogenic right ventricular cardiomyopathy/dysplasia. Circulation 2009;120:366–375. 223. Reddy VY, Reynolds MR, Neuzil P, et al. Prophylactic catheter ablation for the prevention of defibrillator therapy. N Engl J Med 2007;357:2657–2665. 224. Marchlinski FE, Haffajee CI, Beshai JF, et al. Long-term success of irrigated ra-diofrequency catheter ablation of sustained ventricular tachycardia: post-approval THERMOCOOL VT trial. J Am Coll Cardiol 2016;67:674–683. 225. Stevenson WG, Wilber DJ, Natale A, et al. Irrigated radiofrequency catheter ablation guided by electroanatomic mapping for recurrent ventricular tachy-cardia after myocardial infarction: the multicenter THERMOCOOL ventricular tachycardia ablation trial. Circulation 2008;118:2773–2782. 226. Zeppenfeld K, Stevenson WG. Ablation of ventricular tachycardia in patients with structural heart disease. Pacing Clin Electrophysiol 2008;31:358–374. 227. Stevenson WG, Soejima K. Catheter ablation for ventricular tachycardia. Circu-lation 2007;115:2750–2760. 228. Soejima K, Stevenson WG, Sapp JL, Selwyn AP, Couper G, Epstein LM. Endo-cardial and epicardial radiofrequency ablation of ventricular tachycardia associ-ated with dilated cardiomyopathy: the importance of low-voltage scars. J Am Coll Cardiol 2004;43:1834–1842. 229. Calkins H, Epstein A, Packer D, et al. Catheter ablation of ventricular tachy-cardia in patients with structural heart disease using cooled radiofrequency en-ergy: results of a prospective multicenter study. Cooled RF Multi Center Investigators Group. J Am Coll Cardiol 2000;35:1905–1914. 230. Kumar S, Androulakis AF, Sellal JM, et al. Multicenter experience with catheter ablation for ventricular tachycardia in lamin A/C cardiomyopathy. Circ Ar-rhythm Electrophysiol 2016;9:e004357. 231. Honarbakhsh S, Suman-Horduna I, Mantziari L, Ernst S. 
Successful right ven-tricular tachycardia ablation in a patient with left ventricular non-compaction cardiomyopathy. Indian Pacing Electrophysiol J 2013;13:181–184. Towbin et al Evaluation, Risk Stratification, and Management of ACM e359 232. Jackson N, King B, Viswanathan K, Downar E, Spears D. Case report: ablation of diffuse inter-trabecular substrate in a patient with isolated ventricular non-compaction. Indian Pacing Electrophysiol J 2015;15:162–164. 233. Chung FP, Lin YJ, Kuo L, Chen SA. Catheter ablation of ventricular tachy-cardia/fibrillation in a patient with right ventricular amyloidosis with initial man-ifestations mimicking arrhythmogenic right ventricular dysplasia/ cardiomyopathy. Korean Circ J 2017;47:282–285. 234. Mlcochova H, Saliba WI, Burkhardt DJ, et al. Catheter ablation of ventricular fibrillation storm in patients with infiltrative amyloidosis of the heart. J Cardio-vasc Electrophysiol 2006;17:426–430. 235. Magage S, Linhart A, Bultas J, et al. Fabry disease: percutaneous transluminal septal myocardial ablation markedly improved symptomatic left ventricular hy-pertrophy and outflow tract obstruction in a classically affected male. Echocar-diography 2005;22:333–339. 236. Berruezo A, Acosta J, Fernandez-Armenta J. Epicardial ablation may not be necessary in all patients with arrhythmogenic right ventricular dysplasia/cardio-myopathy and frequent ventricular tachycardia: author’s reply. Europace 2017; 19:2047–2048. 237. Philips B, Madhavan S, James C, et al. Outcomes of catheter ablation of ventric-ular tachycardia in arrhythmogenic right ventricular dysplasia/cardiomyopathy. Circ Arrhythm Electrophysiol 2012;5:499–505. 238. Fontaine G. Arrhythmogenic right ventricular dysplasia. Curr Opin Cardiol 1995;10:16–20. 239. Thiene G, Nava A, Corrado D, Rossi L, Pennelli N. Right ventricular cardiomy-opathy and sudden death in young people. N Engl J Med 1988;318:129–133. 240. Corrado D, Basso C, Rizzoli G, Schiavon M, Thiene G. Does sports activity enhance the risk of sudden death in adolescents and young adults? J Am Coll Cardiol 2003;42:1959–1963. 241. Corrado D, Basso C, Pavei A, Michieli P, Schiavon M, Thiene G. Trends in sud-den cardiovascular death in young competitive athletes after implementation of a preparticipation screening program. JAMA 2006;296:1593–1601. 242. Chelko SP, Asimaki A, Andersen P, et al. Central role for GSK3beta in the path-ogenesis of arrhythmogenic cardiomyopathy. JCI Insight 2016;1. pii:85923. 243. Kirchhof P, Fabritz L, Zwiener M, et al. Age- and training-dependent develop-ment of arrhythmogenic right ventricular cardiomyopathy in heterozygous plakoglobin-deficient mice. Circulation 2006;114:1799–1806. 244. Martherus R, Jain R, Takagi K, et al. Accelerated cardiac remodeling in desmo-plakin transgenic mice in response to endurance exercise is associated with per-turbed Wnt/beta-catenin signaling. Am J Physiol Heart Circ Physiol 2016; 310:H174–H187. 245. Cerrone M, Montnach J, Lin X, et al. Plakophilin-2 is required for transcription of genes that control calcium cycling and cardiac rhythm. Nat Commun 2017;8:106. 246. Cruz FM, Sanz-Rosa D, Roche-Molina M, et al. Exercise triggers ARVC pheno-type in mice expressing a disease-causing mutated version of human plakophi-lin-2. J Am Coll Cardiol 2015;65:1438–1450. 247. Strath SJ, Kaminsky LA, Ainsworth BE, et al. Guide to the assessment of phys-ical activity: clinical and research applications: a scientific statement from the American Heart Association. Circulation 2013;128:2259–2279. 248. 
Levine BD, Baggish AL, Kovacs RJ, Link MS, Maron MS, Mitchell JH. Eligi-bility and disqualification recommendations for competitive athletes with cardio-vascular abnormalities: Task Force 1: classification of sports: dynamic, static, and impact: a scientific statement from the American Heart Association and American College of Cardiology. Circulation 2015;132:e262–e266. 249. Maron BJ, Zipes DP, Kovacs RJ. Eligibility and disqualification recommendations for competitive athletes with cardiovascular abnormalities: preamble, principles, and general considerations: a scientific statement from the American Heart Asso-ciation and American College of Cardiology. Circulation 2015;132:e256–e261. 250. Haskell WL, Lee IM, Pate RR, et al. Physical activity and public health: updated recommendation for adults from the American College of Sports Medicine and the American Heart Association. Med Sci Sports Exerc 2007;39:1423–1434. 251. Garber CE, Blissmer B, Deschenes MR, et al. American College of Sports Med-icine position stand. Quantity and quality of exercise for developing and main-taining cardiorespiratory, musculoskeletal, and neuromotor fitness in apparently healthy adults: guidance for prescribing exercise. Med Sci Sports Exerc 2011; 43:1334–1359. 252. Ainsworth BE, Haskell WL, Herrmann SD, et al. 2011 Compendium of physical activities: a second update of codes and MET values. Med Sci Sports Exerc 2011;43:1575–1581. 253. James CA, Bhonsale A, Tichnell C, et al. Exercise increases age-related pene-trance and arrhythmic risk in arrhythmogenic right ventricular dysplasia/ cardiomyopathy-associated desmosomal mutation carriers. J Am Coll Cardiol 2013;62:1290–1297. 254. Sawant AC, Te Riele AS, Tichnell C, et al. Safety of American Heart Association-recommended minimum exercise for desmosomal mutation car-riers. Heart Rhythm 2016;13:199–207. 255. Saberniak J, Hasselberg NE, Borgquist R, et al. Vigorous physical activity im-pairs myocardial function in patients with arrhythmogenic right ventricular car-diomyopathy and in mutation positive family members. Eur J Heart Fail 2014; 16:1337–1344. 256. Sawant AC, Bhonsale A, te Riele AS, et al. Exercise has a disproportionate role in the pathogenesis of arrhythmogenic right ventricular dysplasia/cardiomyopathy in patients without desmosomal mutations. J Am Heart Assoc 2014;3:e001471. 257. La Gerche A, Robberecht C, Kuiperi C, et al. Lower than expected desmosomal gene mutation prevalence in endurance athletes with complex ventricular ar-rhythmias of right ventricular origin. Heart 2010;96:1268–1274. 258. Ruwald AC, Marcus F, Estes NA 3rd, et al. Association of competitive and rec-reational sport participation with cardiac events in patients with arrhythmogenic right ventricular cardiomyopathy: results from the North American multidisci-plinary study of arrhythmogenic right ventricular cardiomyopathy. Eur Heart J 2015;36:1735–1743. 259. Lie OH, Dejgaard LA, Saberniak J, et al. Harmful Effects of exercise intensity and exercise duration in patients with arrhythmogenic cardiomyopathy. JACC Clin Electrophysiol 2018;4:744–753. 260. Gupta R, Tichnell C, Murray B, et al. Comparison of features of fatal versus nonfatal cardiac arrest in patients with arrhythmogenic right ventricular dysplasia/cardiomyopathy. Am J Cardiol 2017;120:111–117. 261. Corrado D, Basso C, Thiene G, et al. Spectrum of clinicopathologic manifesta-tions of arrhythmogenic right ventricular cardiomyopathy/dysplasia: a multi-center study. J Am Coll Cardiol 1997;30:1512–1520. 262. 
Wang W, Cadrin-Tourigny J, Bhonsale A, et al. Arrhythmic outcome of arrhyth-mogenic right ventricular cardiomyopathy patients without implantable defibril-lators. J Cardiovasc Electrophysiol 2018;29:1396–1402. 263. Agullo-Pascual E, Cerrone M, Delmar M. Arrhythmogenic cardiomyopathy and Brugada syndrome: diseases of the connexome. FEBS Lett 2014; 588:1322–1330. 264. Moncayo-Arlandi J, Brugada R. Unmasking the molecular link between arrhyth-mogenic cardiomyopathy and Brugada syndrome. Nat Rev Cardiol 2017; 14:744–756. 265. Gerull B, Kirchner F, Chong JX, et al. Homozygous founder mutation in desmocollin-2 (DSC2) causes arrhythmogenic cardiomyopathy in the Hutterite population. Circ Cardiovasc Genet 2013;6:327–336. 266. Beffagna G, Occhi G, Nava A, et al. Regulatory mutations in transforming growth factor-beta3 gene cause arrhythmogenic right ventricular cardiomyopa-thy type 1. Cardiovasc Res 2005;65:366–373. 267. Lazzarini E, Jongbloed JD, Pilichou K, et al. The ARVD/C genetic variants data-base: 2014 update. Hum Mutat 2015;36:403–410. 268. Sheikh F, Ross RS, Chen J. Cell-cell connection to cardiac disease. Trends Car-diovasc Med 2009;19:182–190. 269. Franke WW, Borrmann CM, Grund C, Pieperhoff S. The area composita of adhering junctions connecting heart muscle cells of vertebrates. I. Molecular definition in intercalated disks of cardiomyocytes by immunoelectron micro-scopy of desmosomal proteins. Eur J Cell Biol 2006;85:69–82. 270. Balse E, Steele DF, Abriel H, Coulombe A, Fedida D, Hatem SN. Dynamic of ion channel expression at the plasma membrane of cardiomyocytes. Physiol Rev 2012;92:1317–1358. 271. Knudsen KA, Wheelock MJ. Plakoglobin, or an 83-kD homologue distinct from beta-catenin, interacts with E-cadherin and N-cadherin. J Cell Biol 1992; 118:671–679. 272. Borrmann CM, Grund C, Kuhn C, Hofmann I, Pieperhoff S, Franke WW. The area composita of adhering junctions connecting heart muscle cells of verte-brates. II. Colocalizations of desmosomal and fascia adhaerens molecules in the intercalated disk. Eur J Cell Biol 2006;85:469–485. 273. Sacco PA, McGranahan TM, Wheelock MJ, Johnson KR. Identification of pla-koglobin domains required for association with N-cadherin and alpha-catenin. J Biol Chem 1995;270:20201–20206. 274. Kostetskii I, Li J, Xiong Y, et al. Induced deletion of the N-cadherin gene in the heart leads to dissolution of the intercalated disc structure. Circ Res 2005; 96:346–354. 275. Li J, Patel VV, Kostetskii I, et al. Cardiac-specific loss of N-cadherin leads to alteration in connexins with conduction slowing and arrhythmogenesis. Circ Res 2005;97:474–481. 276. Li J, Levin MD, Xiong Y, Petrenko N, Patel VV, Radice GL. N-cadherin hap-loinsufficiency affects cardiac gap junctions and arrhythmic susceptibility. J Mol Cell Cardiol 2008;44:597–606. 277. Chen SN, Gurha P, Lombardi R, Ruggiero A, Willerson JT, Marian AJ. The hip-po pathway is activated and is a causal mechanism for adipogenesis in arrhyth-mogenic cardiomyopathy. Circ Res 2014;114:454–468. 278. Tse G. Mechanisms of cardiac arrhythmias. J Arrhythm 2016;32:75–81. 279. Lakatta EG, Vinogradova T, Lyashkov A, et al. The integration of spontaneous intracellular Ca21 cycling and surface membrane ion channel activation e360 Heart Rhythm, Vol 16, No 11, November 2019 entrains normal automaticity in cells of the heart’s pacemaker. Ann N Y Acad Sci 2006;1080:178–206. 280. Baruscotti M, Bucchi A, Difrancesco D. Physiology and pharmacology of the cardiac pacemaker ("funny") current. Pharmacol Ther 2005;107:59–79. 281. 
Vinogradova TM, Maltsev VA, Bogdanov KY, Lyashkov AE, Lakatta EG. Rhythmic Ca21 oscillations drive sinoatrial nodal cell pacemaker function to make the heart tick. Ann N Y Acad Sci 2005;1047:138–156. 282. Groenke S, Larson ED, Alber S, et al. Complete atrial-specific knockout of sodium-calcium exchange eliminates sinoatrial node pacemaker activity. PLoS One 2013;8:e81633. 283. Hamosh A, Scott AF, Amberger J, Bocchini C, Valle D, McKusick VA. Online Mendelian Inheritance in Man (OMIM), a knowledgebase of human genes and genetic disorders. Nucleic Acids Res 2002;30:52–55. 284. Aronsen JM, Swift F, Sejersted OM. Cardiac sodium transport and excitation-contraction coupling. J Mol Cell Cardiol 2013;61:11–19. 285. Kyle JW, Makielski JC. Diseases caused by mutations in Nav1.5 interacting pro-teins. Card Electrophysiol Clin 2014;6:797–809. 286. Shi R, Zhang Y, Yang C, et al. The cardiac sodium channel mutation delQKP 1507-1509 is associated with the expanding phenotypic spectrum of LQT3, con-duction disorder, dilated cardiomyopathy, and high incidence of youth sudden death. Europace 2008;10:1329–1335. 287. Cerrone M, Delmar M. Desmosomes and the sodium channel complex: implica-tions for arrhythmogenic cardiomyopathy and Brugada syndrome. Trends Car-diovasc Med 2014;24:184–190. 288. Maron BJ, Towbin JA, Thiene G, et al. Contemporary definitions and classifica-tion of the cardiomyopathies: an American Heart Association Scientific Statement from the Council on Clinical Cardiology, Heart Failure and Transplantation Com-mittee; Quality of Care and Outcomes Research and Functional Genomics and Translational Biology Interdisciplinary Working Groups; and Council on Epide-miology and Prevention. Circulation 2006;113:1807–1816. 289. Groenewegen WA, Firouzi M, Bezzina CR, et al. A cardiac sodium channel mu-tation cosegregates with a rare connexin40 genotype in familial atrial standstill. Circ Res 2003;92:14–22. 290. Olson TM, Michels VV, Ballew JD, et al. Sodium channel mutations and suscep-tibility to heart failure and atrial fibrillation. JAMA 2005;293:447–454. 291. Shan L, Makita N, Xing Y, et al. SCN5A variants in Japanese patients with left ventricular noncompaction and arrhythmia. Mol Genet Metab 2008;93:468–474. 292. Beckermann TM, McLeod K, Murday V, Potet F, George AL Jr. Novel SCN5A mutation in amiodarone-responsive multifocal ventricular ectopy-associated car-diomyopathy. Heart Rhythm 2014;11:1446–1453. 293. Sato PY, Musa H, Coombs W, et al. Loss of plakophilin-2 expression leads to decreased sodium current and slower conduction velocity in cultured cardiac myocytes. Circ Res 2009;105:523–526. 294. Cerrone M, Lin X, Zhang M, et al. Missense mutations in plakophilin-2 cause sodium current deficit and associate with a Brugada syndrome phenotype. Cir-culation 2014;129:1092–1103. 295. Te Riele AS, Agullo-Pascual E, James CA, et al. Multilevel analyses of SCN5A mutations in arrhythmogenic right ventricular dysplasia/cardiomyopathy sug-gest non-canonical mechanisms for disease pathogenesis. Cardiovasc Res 2017;113:102–111. 296. Leo-Macias A, Agullo-Pascual E, Sanchez-Alonso JL, et al. Nanoscale visuali-zation of functional adhesion/excitability nodes at the intercalated disc. Nat Commun 2016;7:10342. 297. Spezzacatene A, Sinagra G, Merlo M, et al. Arrhythmogenic phenotype in dilated cardiomyopathy: natural history and predictors of life-threatening ar-rhythmias. J Am Heart Assoc 2015;4:e002149. 298. Bang ML, Chen J. Roles of nebulin family members in the heart. Circ J 2015; 79:2081–2087. 299. 
Frank D, Frey N. Cardiac Z-disc signaling network. J Biol Chem 2011; 286:9897–9904. 300. Frank D, Kuhn C, Katus HA, Frey N. The sarcomeric Z-disc: a nodal point in signalling and disease. J Mol Med (Berl) 2006;84:446–468. 301. Knoll R, Buyandelger B, Lab M. The sarcomeric Z-disc and Z-discopathies. J Biomed Biotechnol 2011;2011:569628. 302. Beggs AH, Byers TJ, Knoll JH, Boyce FM, Bruns GA, Kunkel LM. Cloning and characterization of two human skeletal muscle alpha-actinin genes located on chromosomes 1 and 11. J Biol Chem 1992;267:9281–9288. 303. Ribeiro Ede A Jr, Pinotsis N, Ghisleni A, et al. The structure and regulation of human muscle alpha-actinin. Cell 2014;159:1447–1460. 304. Knoll R, Buyandelger B. Z-disc transcriptional coupling, sarcomeroptosis and mechanoptosis [corrected]. Cell Biochem Biophys 2013;66:65–71. 305. Luther PK. The vertebrate muscle Z-disc: sarcomere anchor for structure and sig-nalling. J Muscle Res Cell Motil 2009;30:171–185. 306. Sjoblom B, Salmazo A, Djinovic-Carugo K. Alpha-actinin structure and regula-tion. Cell Mol Life Sci 2008;65:2688–2701. 307. Murphy AC, Young PW. The actinin family of actin cross-linking proteins—a genetic perspective. Cell Biosci 2015;5:49. 308. Thompson TG, Chan YM, Hack AA, et al. Filamin 2 (FLN2): a muscle-specific sarcoglycan interacting protein. J Cell Biol 2000;148:115–126. 309. Gontier Y, Taivainen A, Fontao L, et al. The Z-disc proteins myotilin and FATZ-1 interact with each other and are connected to the sarcolemma via muscle-specific filamins. J Cell Sci 2005;118:3739–3749. 310. van der Ven PF, Wiesner S, Salmikangas P, et al. Indications for a novel muscular dystrophy pathway. gamma-filamin, the muscle-specific filamin iso-form, interacts with myotilin. J Cell Biol 2000;151:235–248. 311. Faulkner G, Pallavicini A, Comelli A, et al. FATZ, a filamin-, actinin-, and telethonin-binding protein of the Z-disc of skeletal muscle. J Biol Chem 2000; 275:41234–41242. 312. Takada F, Vander Woude DL, Tong HQ, et al. Myozenin: an alpha-actinin- and gamma-filamin-binding protein of skeletal muscle Z lines. Proc Natl Acad Sci U S A 2001;98:1595–1600. 313. Kley RA, Hellenbroich Y, van der Ven PF, et al. Clinical and morphological phenotype of the filamin myopathy: a study of 31 German patients. Brain 2007;130:3250–3264. 314. Vorgerd M, van der Ven PF, Bruchertseifer V, et al. A mutation in the dimeriza-tion domain of filamin c causes a novel type of autosomal dominant myofibrillar myopathy. Am J Hum Genet 2005;77:297–304. 315. Faulkner G, Pallavicini A, Formentin E, et al. ZASP: a new Z-band alternatively spliced PDZ-motif protein. J Cell Biol 1999;146:465–475. 316. Klaavuniemi T, Ylanne J. Zasp/Cypher internal ZM-motif containing fragments are sufficient to co-localize with alpha-actinin—analysis of patient mutations. Exp Cell Res 2006;312:1299–1311. 317. Zhou Q, Chu PH, Huang C, et al. Ablation of Cypher, a PDZ-LIM domain Z-line protein, causes a severe form of congenital myopathy. J Cell Biol 2001; 155:605–612. 318. Zheng M, Cheng H, Li X, et al. Cardiac-specific ablation of Cypher leads to a severe form of dilated cardiomyopathy with premature death. Hum Mol Genet 2009;18:701–713. 319. Ziane R, Huang H, Moghadaszadeh B, Beggs AH, Levesque G, Chahine M. Cell membrane expression of cardiac sodium channel Na(v)1.5 is modulated by alpha-actinin-2 interaction. Biochemistry 2010;49:166–178. 320. Arimura T, Hayashi T, Terada H, et al. A Cypher/ZASP mutation associated with dilated cardiomyopathy alters the binding affinity to protein kinase C. 
J Biol Chem 2004;279:6746–6752. 321. Xi Y, Ai T, De Lange E, et al. Loss of function of hNav1.5 by a ZASP1 mutation associated with intraventricular conduction disturbances in left ventricular non-compaction. Circ Arrhythm Electrophysiol 2012;5:1017–1026. 322. Scriven DR, Dan P, Moore ED. Distribution of proteins implicated in excitation-contraction coupling in rat ventricular myocytes. Biophys J 2000; 79:2682–2691. 323. Brette F, Orchard CH. Density and sub-cellular distribution of cardiac and neuronal sodium channel isoforms in rat ventricular myocytes. Biochem Bio-phys Res Commun 2006;348:1163–1166. 324. Ylanne J, Scheffzek K, Young P, Saraste M. Crystal structure of the alpha-actinin rod reveals an extensive torsional twist. Structure 2001; 9:597–604. 325. Perz-Edwards RJ, Reedy MK. Electron microscopy and x-ray diffraction evi-dence for two Z-band structural states. Biophys J 2011;101:709–717. 326. Cukovic D, Lu GW, Wible B, Steele DF, Fedida D. A discrete amino terminal domain of Kv1.5 and Kv1.4 potassium channels interacts with the spectrin re-peats of alpha-actinin-2. FEBS Lett 2001;498:87–92. 327. Maruoka ND, Steele DF, Au BP, et al. alpha-actinin-2 couples to cardiac Kv1.5 channels, regulating current density and channel localization in HEK cells. FEBS Lett 2000;473:188–194. 328. Lu L, Zhang Q, Timofeyev V, et al. Molecular coupling of a Ca21-activated K1 channel to L-type Ca21 channels via alpha-actinin2. Circ Res 2007; 100:112–120. 329. Bagnall RD, Molloy LK, Kalman JM, Semsarian C. Exome sequencing iden-tifies a mutation in the ACTN2 gene in a family with idiopathic ventricular fibril-lation, left ventricular noncompaction, and sudden death. BMC Med Genet 2014;15:99. 330. Girolami F, Iascone M, Tomberli B, et al. Novel alpha-actinin 2 variant associated with familial hypertrophic cardiomyopathy and juvenile atrial arrhythmias: a massively parallel sequencing study. Circ Cardiovasc Genet 2014;7:741–750. 331. Kostin S, Scholz D, Shimada T, et al. The internal and external protein scaf-fold of the T-tubular system in cardiomyocytes. Cell Tissue Res 1998; 294:449–460. 332. Solaro RJ, Van Eyk J. Altered interactions among thin filament proteins modu-late cardiac function. J Mol Cell Cardiol 1996;28:217–230. Towbin et al Evaluation, Risk Stratification, and Management of ACM e361 333. Ross RS. The extracellular connections: the role of integrins in myocardial re-modeling. J Card Fail 2002;8:S326–S331. 334. Korte FS, McDonald KS, Harris SP, Moss RL. Loaded shortening, power output, and rate of force redevelopment are increased with knockout of cardiac myosin binding protein-C. Circ Res 2003;93:752–758. 335. Capetanaki Y. Desmin cytoskeleton: a potential regulator of muscle mitochon-drial behavior and function. Trends Cardiovasc Med 2002;12:339–348. 336. Brodehl A, Dieding M, Klauke B, et al. The novel desmin mutant p.A120D im-pairs filament formation, prevents intercalated disk localization, and causes sud-den cardiac death. Circ Cardiovasc Genet 2013;6:615–623. 337. Berm udez-Jiménez FJ, Carriel V, Brodehl A, et al. Novel desmin mutation p.Glu401Asp impairs filament formation, disrupts cell membrane integrity, and causes severe arrhythmogenic left ventricular cardiomyopathy/dysplasia. Circulation 2018;137:1595–1610. 338. Levin J, Bulst S, Thirion C, et al. Divergent molecular effects of desmin muta-tions on protein assembly in myofibrillar myopathy. J Neuropathol Exp Neurol 2010;69:415–424. 339. McNally EM, Mestroni L. Dilated cardiomyopathy: genetic determinants and mechanisms. 
Circ Res 2017;121:731–748. 340. Klauke B, Kossmann S, Gaertner A, et al. De novo desmin-mutation N116S is associated with arrhythmogenic right ventricular cardiomyopathy. Hum Mol Genet 2010;19:4595–4607. 341. Brodehl A, Hedde PN, Dieding M, et al. Dual color photoactivation localization microscopy of cardiomyopathy-associated desmin mutants. J Biol Chem 2012; 287:16047–16057. 342. van Spaendonck-Zwarts KY, van der Kooi AJ, van den Berg MP, et al. Recurrent and founder mutations in the Netherlands: the cardiac phenotype of DES founder mutations p.S13F and p.N342D. Neth Heart J 2012;20:219–228. 343. Dalakas MC, Park KY, Semino-Mora C, Lee HS, Sivakumar K, Goldfarb LG. Desmin myopathy, a skeletal myopathy with cardiomyopathy caused by muta-tions in the desmin gene. N Engl J Med 2000;342:770–780. 344. Otten E, Asimaki A, Maass A, et al. Desmin mutations as a cause of right ven-tricular heart failure affect the intercalated disks. Heart Rhythm 2010; 7:1058–1064. 345. Seidman CE, Seidman JG. Identifying sarcomere gene mutations in hypertro-phic cardiomyopathy: a personal history. Circ Res 2011;108:743–750. 346. Alfares AA, Kelly MA, McDermott G, et al. Results of clinical genetic testing of 2,912 probands with hypertrophic cardiomyopathy: expanded panels offer limited additional sensitivity. Genet Med 2015;17:880–888. 347. Ingles J, Burns C, Bagnall RD, et al. Nonfamilial hypertrophic cardiomyopathy: prevalence, natural history, and clinical implications. Circ Cardiovasc Genet 2017;10. pii:e001620. 348. Lek M, Karczewski KJ, Minikel EV, et al. Analysis of protein-coding genetic variation in 60,706 humans. Nature 2016;536:285–291. 349. Miller EM, Hinton RB, Czosek R, et al. Genetic testing in pediatric left ventric-ular noncompaction. Circ Cardiovasc Genet 2017;10. 350. Probst S, Oechslin E, Schuler P, et al. Sarcomere gene mutations in isolated left ventricular noncompaction cardiomyopathy do not predict clinical phenotype. Circ Cardiovasc Genet 2011;4:367–374. 351. Kaski JP, Syrris P, Burch M, et al. Idiopathic restrictive cardiomyopathy in chil-dren is caused by mutations in cardiac sarcomere protein genes. Heart 2008; 94:1478–1484. 352. Ko C, Arscott P, Concannon M, et al. Genetic testing impacts the utility of pro-spective familial screening in hypertrophic cardiomyopathy through identifica-tion of a nonfamilial subgroup. Genet Med 2018;20:69–75. 353. van Waning JI, Caliskan K, Hoedemaekers YM, et al. Genetics, clinical features, and long-term outcome of noncompaction cardiomyopathy. J Am Coll Cardiol 2018;71:711–722. 354. Wang C, Hata Y, Hirono K, et al. A wide and specific spectrum of genetic var-iants and genotype-phenotype correlations revealed by next-generation sequencing in patients with left ventricular noncompaction. J Am Heart Assoc 2017;6:e006210. 355. Bonnet D, Martin D, Pascale De L, et al. Arrhythmias and conduction defects as presenting symptoms of fatty acid oxidation disorders in children. Circulation 1999;100:2248–2253. 356. DaTorre SD, Creer MH, Pogwizd SM, Corr PB. Amphipathic lipid metabolites and their relation to arrhythmogenesis in the ischemic heart. J Mol Cell Cardiol 1991;23(Suppl 1):11–22. 357. Arita M, Sato T, Ishida H, Nakazawa H. [Cellular electrophysiological basis of proarrhythmic and antiarrhythmic effects of ischemia-related lipid metabolites]. Rinsho Byori 1993;41:401–408. 358. Huang JM, Xian H, Bacaner M. Long-chain fatty acids activate calcium channels in ventricular myocytes. Proc Natl Acad Sci U S A 1992; 89:6452–6456. 359. 
Schmilinsky-Fluri G, Valiunas V, Willi M, Weingart R. Modulation of cardiac gap junctions: the mode of action of arachidonic acid. J Mol Cell Cardiol 1997; 29:1703–1713. 360. Frigeni M, Balakrishnan B, Yin X, et al. Functional and molecular studies in pri-mary carnitine deficiency. Hum Mutat 2017;38:1684–1699. 361. Longo N, Frigeni M, Pasquali M. Carnitine transport and fatty acid oxidation. Biochim Biophys Acta 2016;1863:2422–2435. 362. Holmgren D, Wahlander H, Eriksson BO, Oldfors A, Holme E, Tulinius M. Car-diomyopathy in children with mitochondrial disease; clinical course and cardio-logical findings. Eur Heart J 2003;24:280–288. 363. Debray FG, Lambert M, Chevalier I, et al. Long-term outcome and clinical spec-trum of 73 pediatric patients with mitochondrial diseases. Pediatrics 2007; 119:722–733. 364. El-Hattab AW, Scaglia F. Mitochondrial cytopathies. Cell Calcium 2016; 60:199–206. 365. Munnich A, Rotig A, Chretien D, et al. Clinical presentation of mitochondrial disorders in childhood. J Inherit Metab Dis 1996;19:521–527. 366. Jackson MJ, Schaefer JA, Johnson MA, Morris AA, Turnbull DM, Bindoff LA. Presentation and clinical investigation of mitochondrial respiratory chain dis-ease. A study of 51 patients. Brain 1995;118(Pt 2):339–357. 367. DiMauro S, Bonilla E, De Vivo DC. Does the patient have a mitochondrial en-cephalomyopathy? J Child Neurol 1999;14(Suppl 1):S23–35. 368. Koenig MK. Presentation and diagnosis of mitochondrial disorders in children. Pediatr Neurol 2008;38:305–313. 369. Petty RK, Harding AE, Morgan-Hughes JA. The clinical features of mitochon-drial myopathy. Brain 1986;109(Pt 5):915–938. 370. Hsu CH, Kwon H, Perng CL, Bai RK, Dai P, Wong LJ. Hearing loss in mito-chondrial disorders. Ann N Y Acad Sci 2005;1042:36–47. 371. Wahbi K, Larue S, Jardel C, et al. Cardiac involvement is frequent in patients with the m.8344A.G mutation of mitochondrial DNA. Neurology 2010;74:674–677. 372. Anan R, Nakagawa M, Miyata M, et al. Cardiac involvement in mitochondrial diseases. A study on 17 patients with documented mitochondrial DNA defects. Circulation 1995;91:955–961. 373. Stollberger C, Finsterer J, Blazek G. Left ventricular hypertrabeculation/non-compaction and association with additional cardiac abnormalities and neuro-muscular disorders. Am J Cardiol 2002;90:899–902. 374. Thavendiranathan P, Dahiya A, Phelan D, Desai MY, Tang WH. Isolated left ventricular non-compaction controversies in diagnostic criteria, adverse out-comes and management. Heart 2013;99:681–689. 375. Kohli SK, Pantazis AA, Shah JS, et al. Diagnosis of left-ventricular non-compaction in patients with left-ventricular systolic dysfunction: time for a reap-praisal of diagnostic criteria? Eur Heart J 2008;29:89–95. 376. Arbustini E, Diegoli M, Fasani R, et al. Mitochondrial DNA mutations and mito-chondrial abnormalities in dilated cardiomyopathy. Am J Pathol 1998; 153:1501–1510. 377. Huss JM, Kelly DP. Mitochondrial energy metabolism in heart failure: a ques-tion of balance. J Clin Invest 2005;115:547–555. 378. Rustin P, Chretien D, Bourgeron T, et al. Assessment of the mitochondrial res-piratory chain. Lancet 1991;338:60. 379. Chinnery P, Majamaa K, Turnbull D, Thorburn D. Treatment for mitochondrial disorders. Cochrane Database Syst Rev 2006;CD004426. 380. Martin DS, Grocott MP. Oxygen therapy in critical illness: precise control of arte-rial oxygenation and permissive hypoxemia. Crit Care Med 2013;41:423–432. 381. Koga Y, Povalko N, Nishioka J, Katayama K, Kakimoto N, Matsuishi T. 
MELAS and L-arginine therapy: pathophysiology of stroke-like episodes. Ann N Y Acad Sci 2010;1201:104–110. 382. Golden AS, Law YM, Shurtleff H, Warner M, Saneto RP. Mitochondrial elec-tron transport chain deficiency, cardiomyopathy, and long-term cardiac trans-plant outcome. Pediatr Transplant 2012;16:265–268. 383. Khambatta S, Nguyen DL, Beckman TJ, Wittich CM. Kearns-Sayre syndrome: a case series of 35 adults and children. Int J Gen Med 2014;7:325–332. 384. Kabunga P, Lau AK, Phan K, et al. Systematic review of cardiac electrical dis-ease in Kearns-Sayre syndrome and mitochondrial cytopathy. Int J Cardiol 2015; 181:303–310. 385. Davis RL, Sue CM. The genetics of mitochondrial disease. Semin Neurol 2011; 31:519–530. 386. Bove KE, Schwartz DC. Focal lipid cardiomyopathy in an infant with parox-ysmal atrial tachycardia. Arch Pathol 1973;95:26–36. 387. Ferrans VJ, McAllister HA Jr, Haese WH. Infantile cardiomyopathy with histio-cytoid change in cardiac muscle cells. Report of six patients. Circulation 1976; 53:708–719. 388. Shehata BM, Patterson K, Thomas JE, Scala-Barnett D, Dasu S, Robinson HB. Histiocytoid cardiomyopathy: three new cases and a review of the literature. Pe-diatr Dev Pathol 1998;1:56–69. e362 Heart Rhythm, Vol 16, No 11, November 2019 389. Shehata BM, Bouzyk M, Shulman SC, et al. Identification of candidate genes for histiocytoid cardiomyopathy (HC) using whole genome expression analysis: analyzing material from the HC registry. Pediatr Dev Pathol 2011;14:370–377. 390. Gelb AB, Van Meter SH, Billingham ME, Berry GJ, Rouse RV. Infantile histio-cytoid cardiomyopathy—myocardial or conduction system hamartoma: what is the cell type involved? Hum Pathol 1993;24:1226–1231. 391. Zimmermann A, Diem P, Cottier H. Congenital "histiocytoid" cardiomyopathy: evidence suggesting a developmental disorder of the Purkinje cell system of the heart. Virchows Arch A Pathol Anat Histol 1982;396:187–195. 392. Malhotra V, Ferrans VJ, Virmani R. Infantile histiocytoid cardiomyopathy: three cases and literature review. Am Heart J 1994;128:1009–1021. 393. MacMahon HE. Infantile xanthomatous cardiomyopathy. Pediatrics 1971; 48:312–315. 394. Andreu AL, Checcarelli N, Iwata S, Shanske S, DiMauro S. A missense muta-tion in the mitochondrial cytochrome b gene in a revisited case with histiocytoid cardiomyopathy. Pediatr Res 2000;48:311–314. 395. Vallance P. Nitric oxide synthesised from L-arginine mediates endothelium dependent dilatation in human veins in vivo. Cardiovasc Res 2000;45:143–147. 396. Ruszkiewicz AR, Vernon-Roberts E. Sudden death in an infant due to histiocy-toid cardiomyopathy. A light-microscopic, ultrastructural, and immunohisto-chemical study. Am J Forensic Med Pathol 1995;16:74–80. 397. Prahlow JA, Teot LA. Histiocytoid cardiomyopathy: case report and literature review. J Forensic Sci 1993;38:1427–1435. 398. Heifetz SA, Faught PR, Bauman M. Pathological case of the month. Histiocytoid (oncocytic) cardiomyopathy. Arch Pediatr Adolesc Med 1995;149:464–465. 399. Suarez V, Fuggle WJ, Cameron AH, French TA, Hollingworth T. Foamy myocardial transformation of infancy: an inherited disease. J Clin Pathol 1987;40:329–334. 400. Franciosi RA, Singh A. Oncocytic cardiomyopathy syndrome. Hum Pathol 1988;19:1361–1362. 401. Grech V, Chan MK, Vella C, Attard Montalto S, Rees P, Trompeter RS. Cardiac malformations associated with the congenital nephrotic syndrome. Pediatr Nephrol 2000;14:1115–1117. 402. Ferrans VJ. Pathologic anatomy of the dilated cardiomyopathies. Am J Cardiol 1989;64:9C–11C. 403. 
Saffitz JE, Ferrans VJ, Rodriguez ER, Lewis FR, Roberts WC. Histiocytoid car-diomyopathy: a cause of sudden death in apparently healthy infants. Am J Car-diol 1983;52:215–217. 404. Kauffman SL, Chandra N, Peress NS, Rodriguez-Torres R. Idiopathic infantile cardiomyopathy with involvement of the conduction system. Am J Cardiol 1972;30:648–652. 405. Van Hare GF. Radiofrequency catheter ablation of cardiac arrhythmias in pedi-atric patients. Adv Pediatr 1994;41:83–109. 406. Shehata BM, Cundiff CA, Lee K, et al. Exome sequencing of patients with his-tiocytoid cardiomyopathy reveals a de novo NDUFB11 mutation that plays a role in the pathogenesis of histiocytoid cardiomyopathy. Am J Med Genet A 2015;167A:2114–2121. 407. Falk RH, Alexander KM, Liao R, Dorbala S. AL (light-chain) cardiac amyloid-osis: a review of diagnosis and therapy. J Am Coll Cardiol 2016;68:1323–1341. 408. Gertz MA, Benson MD, Dyck PJ, et al. Diagnosis, prognosis, and therapy of transthyretin amyloidosis. J Am Coll Cardiol 2015;66:2451–2466. 409. Rocken C, Peters B, Juenemann G, et al. Atrial amyloidosis: an arrhythmogenic substrate for persistent atrial fibrillation. Circulation 2002;106:2091–2097. 410. Park J, Lee SH, Lee JS, et al. High recurrence of atrial fibrillation in patients with high tissue atrial natriuretic peptide and amyloid levels after concomitant maze and mitral valve surgery. J Cardiol 2017;69:345–352. 411. Grogan M, Dispenzieri A. Natural history and therapy of AL cardiac amyloid-osis. Heart Fail Rev 2015;20:155–162. 412. Adams D, Gonzalez-Duarte A, O’Riordan WD, et al. Patisiran, an RNAi thera-peutic, for hereditary transthyretin amyloidosis. N Engl J Med 2018;379:11–21. 413. Benson MD, Waddington-Cruz M, Berk JL, et al. Inotersen treatment for pa-tients with hereditary transthyretin amyloidosis. N Engl J Med 2018;379:22–31. 414. Buxbaum JN. Oligonucleotide drugs for transthyretin amyloidosis. N Engl J Med 2018;379:82–85. 415. Maurer MS, Sultan MB, Rapezzi C. Tafamidis for transthyretin amyloid cardio-myopathy. N Engl J Med 2019;380:196–197. 416. Maurer MS, Schwartz JH, Gundapaneni B, et al. Tafamidis treatment for patients with transthyretin amyloid cardiomyopathy. N Engl J Med 2018;379:1007–1016. 417. Mueller PS, Edwards WD, Gertz MA. Symptomatic ischemic heart disease re-sulting from obstructive intramural coronary amyloidosis. Am J Med 2000; 109:181–188. 418. Reisinger J, Dubrey SW, Lavalley M, Skinner M, Falk RH. Electrophysiologic abnormalities in AL (primary) amyloidosis with cardiac involvement. J Am Coll Cardiol 1997;30:1046–1051. 419. Mathew V, Chaliki H, Nishimura RA. Atrioventricular sequential pacing in car-diac amyloidosis: an acute Doppler echocardiographic and catheterization he-modynamic study. Clin Cardiol 1997;20:723–725. 420. Mathew V, Olson LJ, Gertz MA, Hayes DL. Symptomatic conduction system disease in cardiac amyloidosis. Am J Cardiol 1997;80:1491–1492. 421. Rezk T, Whelan CJ, Lachmann HJ, et al. Role of implantable intracardiac defi-brillators in patients with cardiac immunoglobulin light chain amyloidosis. Br J Haematol 2018;182:145–148. 422. Mohammed SF, Mirzoyev SA, Edwards WD, et al. Left ventricular amyloid deposition in patients with heart failure and preserved ejection fraction. JACC Heart Fail 2014;2:113–122. 423. Li JP, Galvis ML, Gong F, et al. In vivo fragmentation of heparan sulfate by hep-aranase overexpression renders mice resistant to amyloid protein A amyloidosis. Proc Natl Acad Sci U S A 2005;102:6473–6477. 424. Penchala SC, Connelly S, Wang Y, et al. 
AG10 inhibits amyloidogenesis and cellular toxicity of the familial amyloid cardiomyopathy-associated V122I trans-thyretin. Proc Natl Acad Sci U S A 2013;110:9992–9997. 425. Ton VK, Mukherjee M, Judge DP. Transthyretin cardiac amyloidosis: pathogen-esis, treatments, and emerging role in heart failure with preserved ejection frac-tion. Clin Med Insights Cardiol 2014;8:39–44. 426. Eriksson A, Eriksson P, Olofsson BO, Thornell LE. The sinoatrial node in famil-ial amyloidosis with polyneuropathy. A clinico-pathological study of nine cases from northern Sweden. Virchows Arch A Pathol Anat Histopathol 1984; 402:239–246. 427. Eriksson P, Olofsson BO. Pacemaker treatment in familial amyloidosis with pol-yneuropathy. Pacing Clin Electrophysiol 1984;7:702–706. 428. Olofsson BO, Eriksson P, Eriksson A. The sick sinus syndrome in familial amyloidosis with polyneuropathy. Int J Cardiol 1983;4:71–73. 429. Barbhaiya CR, Kumar S, Baldinger SH, et al. Electrophysiologic assessment of conduction abnormalities and atrial arrhythmias associated with amyloid cardio-myopathy. Heart Rhythm 2016;13:383–390. 430. Capone R, Amsterdam EA, Mason DT, Zelis R. Systemic amyloidosis, func-tional coronary insufficiency, and autonomic impairment. Ann Intern Med 1972;76:599–603. 431. French JM, Hall G, Parish DJ, Smith WT. Peripheral and autonomic nerve involvement in primary amyloidosis associated with uncontrollable diarrhoea and steatorrhoea. Am J Med 1965;39:277–284. 432. Wang AK, Fealey RD, Gehrking TL, Low PA. Patterns of neuropathy and autonomic failure in patients with amyloidosis. Mayo Clin Proc 2008; 83:1226–1230. 433. Gertz MA, Falk RH, Skinner M, Cohen AS, Kyle RA. Worsening of congestive heart failure in amyloid heart disease treated by calcium channel-blocking agents. Am J Cardiol 1985;55:1645. 434. Pollak A, Falk RH. Left ventricular systolic dysfunction precipitated by verap-amil in cardiac amyloidosis. Chest 1993;104:618–620. 435. Griffiths BE, Hughes P, Dowdle R, Stephens MR. Cardiac amyloidosis with asymmetrical septal hypertrophy and deterioration after nifedipine. Thorax 1982;37:711–712. 436. Tan NY, Mohsin Y, Hodge DO, et al. Catheter ablation for atrial arrhythmias in patients with cardiac amyloidosis. J Cardiovasc Electrophysiol 2016; 27:1167–1173. 437. Dubrey SW, Cha K, Anderson J, et al. The clinical features of immunoglobulin light-chain (AL) amyloidosis with heart involvement. QJM 1998;91:141–157. 438. Hamon D, Algalarrondo V, Gandjbakhch E, et al. Outcome and incidence of appropriate implantable cardioverter-defibrillator therapy in patients with car-diac amyloidosis. Int J Cardiol 2016;222:562–568. 439. Kristen AV, Dengler TJ, Hegenbart U, et al. Prophylactic implantation of cardioverter-defibrillator in patients with severe cardiac amyloidosis and high risk for sudden cardiac death. Heart Rhythm 2008;5:235–240. 440. Lin G, Dispenzieri A, Brady PA. Successful termination of a ventricular arrhythmia by implantable cardioverter defibrillator therapy in a patient with car-diac amyloidosis: insight into mechanisms of sudden death. Eur Heart J 2010; 31:1538. 441. Patel KS, Hawkins PN, Whelan CJ, Gillmore JD. Life-saving implantable cardi-overter defibrillator therapy in cardiac AL amyloidosis. BMJ Case Rep 2014; 2014. 442. Varr BC, Zarafshar S, Coakley T, et al. Implantable cardioverter-defibrillator placement in patients with cardiac amyloidosis. Heart Rhythm 2014; 11:158–162. 443. Feng D, Edwards WD, Oh JK, et al. Intracardiac thrombosis and embolism in patients with cardiac amyloidosis. 
Circulation 2007;116:2420–2426. 444. Zubkov AY, Rabinstein AA, Dispenzieri A, Wijdicks EF. Primary systemic amyloidosis with ischemic stroke as a presenting complication. Neurology 2007;69:1136–1141. Towbin et al Evaluation, Risk Stratification, and Management of ACM e363 445. Sayed RH, Rogers D, Khan F, et al. A study of implanted cardiac rhythm re-corders in advanced cardiac AL amyloidosis. Eur Heart J 2015;36:1098–1105. 446. Muchtar E, Gertz MA, Kumar SK, et al. Digoxin use in systemic light-chain (AL) amyloidosis: contra-indicated or cautious use? Amyloid 2018;25:86–92. 447. Tukkie R, Sogaard P, Vleugels J, de Groot IK, Wilde AA, Tan HL. Delay in right ventricular activation contributes to Brugada syndrome. Circulation 2004; 109:1272–1277. 448. Papavassiliu T, Veltmann C, Doesch C, et al. Spontaneous type 1 electrocardio-graphic pattern is associated with cardiovascular magnetic resonance imaging changes in Brugada syndrome. Heart Rhythm 2010;7:1790–1796. 449. Catalano O, Antonaci S, Moro G, et al. Magnetic resonance investigations in Brugada syndrome reveal unexpectedly high rate of structural abnormalities. Eur Heart J 2009;30:2241–2248. 450. van Hoorn F, Campian ME, Spijkerboer A, et al. SCN5A mutations in Brugada syndrome are associated with increased cardiac dimensions and reduced contrac-tility. PLoS One 2012;7:e42037. 451. Nademanee K, Veerakul G, Chandanamattha P, et al. Prevention of ventricular fibrillation episodes in Brugada syndrome by catheter ablation over the anterior right ventricular outflow tract epicardium. Circulation 2011;123:1270–1279. 452. Nademanee K, Raju H, de Noronha SV, et al. Fibrosis, connexin-43, and con-duction abnormalities in the Brugada syndrome. J Am Coll Cardiol 2015; 66:1976–1986. 453. Ohkubo K, Watanabe I, Okumura Y, et al. Right ventricular histological sub-strate and conduction delay in patients with Brugada syndrome. Int Heart J 2010;51:17–23. 454. Zumhagen S, Spieker T, Rolinck J, et al. Absence of pathognomonic or inflam-matory patterns in cardiac biopsies from patients with Brugada syndrome. Circ Arrhythm Electrophysiol 2009;2:16–23. 455. Frustaci A, Priori SG, Pieroni M, et al. Cardiac histological substrate in patients with clinical phenotype of Brugada syndrome. Circulation 2005; 112:3680–3687. 456. Corrado D, Zorzi A, Cerrone M, et al. Relationship between arrhythmogenic right ventricular cardiomyopathy and Brugada syndrome: new insights from mo-lecular biology and clinical implications. Circ Arrhythm Electrophysiol 2016; 9:e003631. 457. Xiong Q, Cao Q, Zhou Q, et al. Arrhythmogenic cardiomyopathy in a patient with a rare loss-of-function KCNQ1 mutation. J Am Heart Assoc 2015; 4:e001526. 458. Schmitt N, Schwarz M, Peretz A, Abitbol I, Attali B, Pongs O. A recessive C-terminal Jervell and Lange-Nielsen mutation of the KCNQ1 channel impairs subunit assembly. EMBO J 2000;19:332–340. 459. Chen YH, Xu SJ, Bendahhou S, et al. KCNQ1 gain-of-function mutation in fa-milial atrial fibrillation. Science 2003;299:251–254. 460. Barhanin J, Lesage F, Guillemare E, Fink M, Lazdunski M, Romey G. K(V) LQT1 and lsK (minK) proteins associate to form the I(Ks) cardiac potassium cur-rent. Nature 1996;384:78–80. 461. Bellocq C, van Ginneken AC, Bezzina CR, et al. Mutation in the KCNQ1 gene leading to the short QT-interval syndrome. Circulation 2004;109:2394–2397. 462. Bartos DC, Anderson JB, Bastiaenen R, et al. A KCNQ1 mutation causes a high penetrance for familial atrial fibrillation. J Cardiovasc Electrophysiol 2013; 24:562–569. 463. 
Das S, Makino S, Melman YF, et al. Mutation in the S3 segment of KCNQ1 re-sults in familial lone atrial fibrillation. Heart Rhythm 2009;6:1146–1153. 464. Bagnall RD, Das KJ, Duflou J, Semsarian C. Exome analysis-based molecular autopsy in cases of sudden unexplained death in the young. Heart Rhythm 2014;11:655–662. 465. Kharbanda M, Hunter A, Tennant S, et al. Long QT syndrome and left ventric-ular noncompaction in 4 family members across 2 generations with KCNQ1 mu-tation. Eur J Med Genet 2017;60:233–238. 466. Nakashima K, Kusakawa I, Yamamoto T, et al. A left ventricular noncompaction in a patient with long QT syndrome caused by a KCNQ1 mutation: a case report. Heart Vessels 2013;28:126–129. 467. Ogawa K, Nakamura Y, Terano K, Ando T, Hishitani T, Hoshino K. Isolated non-compaction of the ventricular myocardium associated with long QT syn-drome: a report of 2 cases. Circ J 2009;73:2169–2172. 468. Clapham DE, Julius D, Montell C, Schultz G. International Union of Pharma-cology. XLIX. Nomenclature and structure-function relationships of transient re-ceptor potential channels. Pharmacol Rev 2005;57:427–450. 469. Ramsey IS, Delling M, Clapham DE. An introduction to TRP channels. Annu Rev Physiol 2006;68:619–647. 470. Abriel H, Syam N, Sottas V, Amarouch MY, Rougier JS. TRPM4 channels in the cardiovascular system: physiology, pathophysiology, and pharmacology. Biochem Pharmacol 2012;84:873–881. 471. Murakami M, Xu F, Miyoshi I, Sato E, Ono K, Iijima T. Identification and char-acterization of the murine TRPM4 channel. Biochem Biophys Res Commun 2003;307:522–528. 472. Nilius B, Prenen J, Voets T, Droogmans G. Intracellular nucleotides and poly-amines inhibit the Ca21-activated cation channel TRPM4b. Pflugers Arch 2004;448:70–75. 473. Kruse M, Schulze-Bahr E, Corfield V, et al. Impaired endocytosis of the ion channel TRPM4 is associated with human progressive familial heart block type I. J Clin Invest 2009;119:2737–2744. 474. Liu H, El Zein L, Kruse M, et al. Gain-of-function mutations in TRPM4 cause autosomal dominant isolated cardiac conduction disease. Circ Cardiovasc Genet 2010;3:374–385. 475. Stallmeyer B, Zumhagen S, Denjoy I, et al. Mutational spectrum in the Ca(21)– activated cation channel gene TRPM4 in patients with cardiac conductance dis-turbances. Hum Mutat 2012;33:109–117. 476. Liu H, Chatel S, Simard C, et al. Molecular genetics and functional anomalies in a series of 248 Brugada cases with 11 mutations in the TRPM4 channel. PLoS One 2013;8:e54131. 477. Daumy X, Amarouch MY, Lindenbaum P, et al. Targeted resequencing iden-tifies TRPM4 as a major gene predisposing to progressive familial heart block type I. Int J Cardiol 2016;207:349–358. 478. Forleo C, D’Erchia AM, Sorrentino S, et al. Targeted next-generation sequencing detects novel gene-phenotype associations and expands the muta-tional spectrum in cardiomyopathies. PLoS One 2017;12:e0181842. 479. Saito Y, Nakamura K, Nishi N, et al. TRPM4 Mutation in Patients With Ventric-ular Noncompaction and Cardiac Conduction Disease. Circ Genom Precis Med 2018;11:e002103. 480. MacLennan DH, Kranias EG. Phospholamban: a crucial regulator of cardiac contractility. Nat Rev Mol Cell Biol 2003;4:566–577. 481. Haghighi K, Kolokathis F, Gramolini AO, et al. A mutation in the human phos-pholamban gene, deleting arginine 14, results in lethal, hereditary cardiomyop-athy. Proc Natl Acad Sci U S A 2006;103:1388–1393. 482. Posch MG, Perrot A, Geier C, et al. 
189394
https://www.youtube.com/watch?v=vOz-NYTYUH0
Etymonline Tutorial for Educators
Michelle Elia
Posted: 10 Mar 2024

Description: Etymonline.com is a POWERFUL resource for learning the etymology and morphology of words if you know how to use it. Like our language, the website is complex and a bit daunting. However, once you learn to navigate this site, it can be a valuable tool for educators to build morphological awareness for students. Handouts for this presentation can be accessed via this link - Learn more about Michelle Elia here -

Transcript:

Introduction. Hello, and welcome to this brief tutorial on identifying morphemes using Etymonline. My name is Michelle Elia; I'm an assistant professor at Marietta College, and today I'm going to guide you through this process. Let's start with why I created this tutorial. We know that teaching morphology improves word recognition, and we know that it improves our students' vocabulary knowledge. Morphology is incredibly powerful, and when we build morphological awareness, our students can apply that knowledge to new words they encounter but have never been taught. In order to build morphological awareness, though, we as teachers need to know the morphemes in the vocabulary and words we're teaching daily. That brings me to the most frequently asked question I receive: how can I determine the morphemes in words? I usually tell teachers to refer to Etymonline, a fantastic free resource, but also one that can be a little challenging, especially if we don't have a whole lot of background knowledge on this topic.

Etymonline Tutorial. To aid in this endeavor, I created three fairly simple steps for navigating Etymonline. Listen: Etymonline is not perfect or easy to use, but neither is our incredibly complex language. It is, though, a really fantastic tool once you get comfortable with it. Step one: remove the affixes. Not all the affixes, just the ones you know, so that we get a slightly smaller word, which we refer to as a stem: a morpheme or morphemes that can stand alone. Step two: plug that stem into Etymonline. Step three, last but not least: look for the links and the plus signs. Let me show you an example.

Remove affixes. Step one: remove affixes, and by that I don't mean remove them all. Start with any affixes you might know, the ones you're really comfortable with, to get to a stem that can stand alone. Think about the word "perceiving." I could plug it into Etymonline just the way it is, but there would be a couple of extra clicks involved; or I can remove the -ing and be left with "perceive." Why would I do that? I already know that -ing is a verb suffix, one I'm pretty familiar with, so I don't need to plug it into Etymonline to find out what that morpheme means.

Remove inflectional suffixes. I usually peel off the frequently used inflectional suffixes (-ing, plural -s, -ed, -es) before I plug any word into Etymonline. Another helpful list comes to us from the fabulous research of Holly Lane and her colleagues, in a fantastic and fascinating article on the most frequently used, high-utility morphemes for instruction. If you know the meanings of any of these morphemes, you can peel those prefixes or suffixes off, or simply recognize their meaning up front when you go to plug the word into Etymonline. This is the "top ten" list, so to speak, of prefixes and suffixes from this wonderful study of morpheme frequency in academic words by Holly Lane et al.

Language of origin. My next step is to put that stem into Etymonline. I plug it in, and then I should see the language of origin and a brief definition of the word. So once again, with my word "perceive," I plugged it in, and this is what I got. I tend to focus on the language of origin because I'm always fascinated by it, and I had speculated that "perceive" was probably French simply because of that soft c. Here we see that this word comes to us from Old French, and ultimately from Latin as well. So we've got that information, a little validation, which is always good, and then we can look at the meaning of the word. Of course, I always back this up with tools like dictionary.com to provide the best student-friendly definition.

Links and plus signs. Step three, the last step, says to look for the links and the plus signs. The red hyperlinks tell us where we might need to keep on clicking in order to find new information, and the plus sign is where I can look and find the morphemes. So once again I'll go back to "perceive," and here I can see a plus sign, with per- meaning "thoroughly," and there is even a hyperlink to click on to learn more about that prefix, though I can see it already defined for me. And then I've got my Latin root, meaning "to grasp, take." So I'm able to learn right here most of the morphemes that I need: I already knew the -ing, now I've got my prefix per-, and then I've got my root, the Latin root capere. Obviously we're not spelling it that way in this particular word; I know the Latin root was capere, but I'm really thinking of it in terms of the base that we typically use in English, which is ceive, and we know that base means "to grasp" or "take." Now, fun fact: that particular base, ceive, is what we call a twin base, and its other version is cept. Our twin bases both have the same meaning, and once again I can go to Etymonline to see that and learn a little bit more. To save myself some time, I already plugged in the word "interception." If I go back to step one, which was to peel off any affixes: I know that the -ion in "interception" means "the act or state of," so I'm going to peel it off and go right down here to "intercept." Once again I get the meaning of that word, and then I look and see: okay, I've got inter-, which means "between," and there's my red hyperlink if I wanted to click on that and get more information about that prefix. And then we have my Latin root, which is a combining form of the one we just saw in "perceive," spelled in our base as cept. So now we can see that cept and ceive are those twin bases again, "to take" or "to catch." As we think of the word "interception" (here I've got it on my dry-erase board), if I wanted to teach my students the morphemes within it: well, we know inter- is "between"; we've got that -ion, "the act or state of," which also turns this word into a noun; and then I've got my cept, which we know goes with ceive, meaning "to catch" or "to take," which can also be "to grasp." So we can now see those morphemes by following those steps. As I mentioned, they're not exactly that simple, but they are pretty helpful.

Okay, let's try another one, shall we? Let's look at "constitution," since that's something we're often teaching in our social studies class. Step one: I can pull off that -ion. I could have gone to "constitution" here, but then look at what it said: "go to constitute." So taking off the -ion, which means "the act or state of," leaves me with "constitute." Step two: plug it into Etymonline, and here's what we get. I see the definition and I see the language of origin, which is in fact Latin. Then step three: I'm going to look for those red hyperlinks and the plus signs. For "constitute" I've got com-, and con- and com- are chameleon prefixes meaning "together." Again, if I wanted more information I could just click on com-, which takes me to that meaning, "with" or "together." There's my prefix. Now I can go back: so I've got "together," and then I've got this Latin root. Remember, it is not the base that we typically see in our spelling, but it is the root: statuere, which means "to set," also "to stand" or "to make or be firm." So "to set together" is to constitute, and "the act of setting together" is a constitution. I always tell my students that the connection to those morphemes isn't always direct; we've had a lot of time to change and adjust how we use them, but you can help make the connections between the root and the common, present-day meaning and use of that particular word.

Review. So let's go back and review our three steps (and I'll clear off all that stuff on my slides here). Step one: remove any prefixes or suffixes you already know; you should be left with the stem. Step two: plug that stem into Etymonline, and you'll see the language of origin and a brief definition. Step three: look for the red hyperlinks and the plus signs; that's where you're really going to dig into the good stuff and learn about the meanings of those individual morphemes. And remember, you don't necessarily have to peel off the prefixes or suffixes you already know; you just won't have to dig deeper on those, because you already know them. So that's it: the three simple steps (sort of) for using Etymonline. I'm going to close with the best advice I've ever been given, which comes to us from our dearly departed friend William Van Cleave, and I love this: perhaps one of the best things about morphological study is that you don't have to be, or act like, an expert. This is about exploring words with students, helping them uncover meanings and deepen their understanding, and in doing so helping them develop word sense to explore words on their own. It's a win-win for everyone, and it will make your students better readers, writers, and thinkers. So in honor of William Van Cleave, I'm going to tell you: don't be an expert. That's okay; learn and grow with your students. There are some really amazing resources to help you and your students learn and grow. I've got so many of them that I use all the time, such as Morpheme Magic, Sue Hegland's book Beneath the Surface of Words, and any of the resources by Marcia Henry, which are fantastic. So dig deeper and build your own knowledge while you're building that of your students, and you don't have to be an expert to do it. It's all about all of us growing together. Thanks so much for your time today, friends.
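Step one of this workflow, peeling off the inflectional suffixes you already know to isolate a stem, is mechanical enough to sketch in code. The snippet below is a toy illustration I'm adding, not part of the tutorial; the suffix list and the `strip_affixes` name are my own, and as the caveat in the comments notes, real affix stripping needs the human judgment the video describes.

```python
# A toy version of "step one": peel one known inflectional suffix
# (-ing, -ed, -es, -s) off a word to approximate its stem before
# looking the stem up on etymonline.com. This ignores spelling
# changes (e.g., "perceiving" -> "perceiv", not "perceive"), which
# is exactly why the video recommends checking the stem by hand.

KNOWN_SUFFIXES = ["ing", "ed", "es", "s"]  # try longer suffixes first

def strip_affixes(word: str) -> str:
    """Remove one known inflectional suffix, if present."""
    for suffix in KNOWN_SUFFIXES:
        if word.endswith(suffix) and len(word) > len(suffix) + 2:
            return word[: -len(suffix)]
    return word

print(strip_affixes("perceiving"))     # perceiv (restore the final e by hand)
print(strip_affixes("interceptions"))  # interception (peel -ion next if you know it)
```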
189395
https://www.youtube.com/watch?v=ADLoWIxKsyQ
Understanding Slope (Simplifying Math)
Buffington
Posted: 9 May 2017

Description: Understand what slope is and how to calculate slope when given a graph! Easy as Pi!

Transcript:

Intro. Hello, this is Mr. Buffington, and we are going to answer this question today: what is slope? We're going to build some understanding of slope, mainly through practice, practice, practice. There we go.

What is Slope. All right, let's talk about slope in very general terms. Slope is basically how steep a line is. A line with a really steep, or high, slope would look kind of like this: just a really steep line. A shallow slope would look more like this, where the line is not quite as steep. Another way to classify slope: a line can have a steep slope going downwards (and by downwards I mean that when you're moving from left to right, starting here and going that way, you go down), and that would be a negative slope; or you could have a shallow negative slope. So you can have a steep or shallow slope, positive or negative. The last two kinds of slope are a flat slope, which is basically a zero slope (that's a horizontal line), and a vertical line, whose slope is called an undefined slope. So that's the basics of what slope looks like. But we can't just say one slope is steeper than another, or that one is steep and another is shallow.

Measuring Slope. So we have to have some way of measuring which slope is greater and which slope is less. We can't just take these two lines and go, "Ha ha, my slope's bigger than yours." That's not how it works. We have to give slope a number so that we can measure it, and that's what we're going to talk about today: giving slope an actual number. Here is how you do it. First of all, you have to understand one thing: the slope of a straight line is consistent; it goes up at a constant rate. So you can pick any two points on the line to figure out the slope. If there are four points on this line, you can pick any two of them, and then you just ask yourself: how much does it go up, and how much does it go side to side? Once you figure that out, you have the slope. So let's take a look at this one. I'm going to use the red and blue points you can see right here: to get from the red to the blue, it goes up two and over one. Now, to show you that it is consistent, I'll show you the same thing using the yellow and green points: it goes up two and over one. That's the slope. It's often called the rise over the run, or how much the line changes up and down over how much it changes side to side. In this case, 2 over 1. That's the slope; that is the slope for the line; that's it. We can sometimes make it very complicated, but it doesn't need to be: if you're given a graph, you just go from one point on the line to another point on the line. That's how easy it is. So it is now your turn. I want you to pick any two points on the line. I'll give you a couple of points that you can work with, or you can choose to ignore those points and use other points; that's fine. Pick any two points, find out how much it goes up, and put that over how much it goes side to side. Pause the video.

Try It Out. Hey, which points did you pick? Did you pick red and blue, or did you pick yellow and green? If you picked blue and yellow, it would be the same thing: it goes up one and over three; up one, and over one, two, three. So the rise is one and the run is three, and the slope of this line is 1/3, one over three. I hope you got that. If you did, fantastic. If you didn't do it and you just kept watching the video, that's just cheating.

Negative Slope. All right, now it's time for me to take a moment and teach something else. If the line goes down when you're moving left to right (we talked about this a little bit before), we call that a negative slope. But you'll still calculate it the same exact way. I'm going to put four points on here just like I did before, and I'm going to ask the same exact questions: how much does it go down, and how much does it go side to side? That's how we'll calculate our slope. So let's do it. How much does it go down? One between red and blue, and one between yellow and green. Now notice that when I went down from red to blue, I went down one and over one, and I called that going down negative one; that will give us our negative slope. So the rise over the run is negative one over one.

Different Points. Just one quick extension on this: what if you picked different points? What if I pick the red and green points? I would calculate it the same exact way: how much does it go down, and how much does it go side to side? Here it is: it goes down five (again, we call that negative five), and it goes over positive five. So the rise over the run would be -5 over 5, and we would reduce that fraction to lowest terms and find that the slope is the same: -1 is the slope of this line. You can simplify slope just like you would simplify a fraction, so we could call this slope -1 instead of -1 over 1, or you can leave it as a fraction. Sometimes with slope it's helpful to just leave it as a fraction; what you don't want to do is reduce it to a decimal. You definitely want to leave slope as a fraction so that you can always tell the rise over the run. Now it's your turn: I want you to find the slope of this negative line. Again, I'll give you some points, or you could use any other ones, and calculate the slope of this line. Pause the video and see if you can do it. All right, there are the four points. I'm going to use the red and blue again: my rise is -1 and my run is 2. From the yellow to the green it's also -1 over 2. So my slope would be -1/2, negative one over two: the rise, or change up and down, is the number on top, and the run is the change on the bottom. And just to demonstrate, in case you chose the red and green (or red and yellow, blue and yellow, blue and green; it doesn't matter), if you were to have chosen those two points, it would be a slope of -4 over 8, which also reduces down to -1/2. So it doesn't matter which points you pick, because the slope is going to be constant; but if you pick points that are far apart, you might need to reduce the fraction to lowest terms.

Recap. So, quick recap: slope is the rise over the run, and to learn how to work with slope you need to practice, practice, and practice. Here are a couple more videos I thought you might like; go ahead and click on those. Like, subscribe, and share it with your friends. Hope that lesson was helpful for you. Have a wonderful day.
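Since the whole lesson boils down to one formula, slope = rise/run, here is a short sketch of it in code. This is my illustration, not something from the video; Python's `fractions.Fraction` conveniently keeps the answer as a reduced fraction, which matches the video's advice to leave slope as a fraction rather than a decimal.

```python
from fractions import Fraction

def slope(p1, p2):
    """Rise over run between two points (x, y) on a line."""
    (x1, y1), (x2, y2) = p1, p2
    if x1 == x2:
        raise ValueError("vertical line: slope is undefined")
    return Fraction(y2 - y1, x2 - x1)  # Fraction auto-reduces to lowest terms

print(slope((0, 0), (1, 2)))   # 2      (up 2, over 1)
print(slope((0, 0), (3, 1)))   # 1/3    (up 1, over 3)
print(slope((0, 4), (8, 0)))   # -1/2   (down 4, over 8, reduced)
```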
189396
https://www.youtube.com/watch?v=UUhfjkB7aag
Electric Dipole | Derivation of Electric Field Due to Dipole
Physics and animation
Posted: 4 Jun 2023

Description: Learn how to derive the electric field created by an electric dipole in this comprehensive tutorial. Understand the concept of an electric dipole and follow along as we break down the mathematical steps to calculate the electric field due to the dipole. If you're looking to enhance your understanding of electric dipoles and electric fields, this video is perfect for you. Dive into the world of electrostatics with us! (Spelling mistake in the video: "equitorial" should be "equatorial.")

Time stamps: 00:00 Intro, dipole electric field; 0:32 Derivation of the electric field in the axial position; 6:21 Derivation of the electric field in the equatorial position; 10:31 Direction and derivation of the net field of a dipole; 12:51 Derivation of the electric field at any point around the dipole.

Transcript:

Intro: dipole electric field. Hello friends, welcome back to the 12th video of the Class 12 physics animation series from Physics and Animation. Today we will discuss the equation and derivation of the electric field created by a dipole, so let's get started. In the 10th video of this series we learned that a dipole generates a beautiful electric field around it which is not radially symmetric like that of a point charge. Therefore, for a dipole, the equation for the electric field intensity should be different. In today's video we will understand this concept.

Derivation of the electric field in the axial position. First, let's find the intensity of the electric field along the axis of the dipole, which is also known as the axial position. As we have considered in previous videos, a dipole consists of two equal and opposite charges, each at a distance L from its center. So let's assume that we have placed a positive test charge at a point M on the axis, at a distance x from the center of the dipole, O. We know that the electric field intensity created by a point charge at any point is E = (1/4πε₀) · q/r², where q is the source charge and r is the distance between the source charge and the point M. For simplicity, let's write the constant 1/4πε₀ as k, so the equation simplifies to E = kq/r². Now, for the field at distance x from the dipole's center, the magnitude of the field due to the positive charge is kq divided by the square of the distance between the positive charge and M, and that distance is (x - L). Similarly, the magnitude of the field due to the negative charge is kq divided by the square of (x + L). So now we have two equations: the field at M due to the positive charge, kq/(x - L)², and the field at M due to the negative charge, kq/(x + L)². Both fields act at the same point M, so we need to calculate the net field there. Let's analyze the situation at M. The field at M due to the positive charge points radially outward, and because M is closer to the positive charge, this field is the stronger of the two; we represent this stronger intensity with a bigger arrow. The field at M due to the negative charge points radially inward (toward the charge), and its effect is weaker because M is farther from the negative charge; we represent it with a shorter arrow. So at the axial position, the net field, which we call E_axial, equals the field from the positive charge minus the field from the negative charge:

E_axial = kq/(x - L)² - kq/(x + L)².

Since kq is common, we take it out. The denominator becomes (x - L)²(x + L)², while the numerator becomes (x + L)² - (x - L)². If we expand the numerator using the identities for (a + b)² and (a - b)², the first term is x² + L² + 2xL and the second is x² + L² - 2xL. In the denominator, we group the square: it becomes the square of (x + L)(x - L), and since (x + L)(x - L) = x² - L², the denominator is (x² - L²)². Now, it is important to understand that in a dipole the separation between the two charges is very small compared with the distance x at which we calculate the field, so we can neglect L² next to x²; after neglecting it, the denominator is left with x⁴. Expanding the numerator, the x² and L² terms cancel, and we are left with 2xL + 2xL, which simply adds up to 4xL. If we write 4 as 2 × 2 and group q · 2L, we recognize 2qL as the dipole moment p, whose direction is taken from the negative charge toward the positive charge of the dipole. Cancelling one factor of x, the final equation for the field at the axial position is

E_axial = 2kp/x³,

and in vector form it is 2kp/x³ directed along the dipole moment vector. As for the direction: E_axial aligns with the dipole moment, because at M the field generated by the positive charge dominates, so the net field points outward, in the same direction as the dipole moment.
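Before moving on to the equatorial case, it may help to see the axial result written compactly. This LaTeX summary is my addition (the video shows the steps on screen rather than as one displayed equation); it keeps the exact expression next to the far-field approximation derived above:

```latex
E_{\text{axial}}
  = \frac{kq}{(x-L)^{2}} - \frac{kq}{(x+L)^{2}}
  = \frac{4kqLx}{\left(x^{2}-L^{2}\right)^{2}}
  \;\xrightarrow{\;x \gg L\;}\;
  \frac{2kp}{x^{3}},
  \qquad p = 2qL .
```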
Derivation of the electric field in the equatorial position. Okay, let's now proceed to the second case and determine the magnitude of the electric field at a point M located at a perpendicular distance x from the center of the dipole. This position is also known as the equatorial position. In this scenario, M is equidistant from both the positive and the negative charge: it is positioned at the same distance r from each charge, and the line from each charge to M makes the same angle θ with the dipole axis. It is important to note that the magnitudes of the two charges in a dipole are equal, with one positive and one negative. Knowing this, we can say that the magnitudes of the two fields at M are the same; however, the field due to the positive charge points radially outward (away from it), while the field due to the negative charge points radially inward (toward it). The lengths of both arrows are the same because their magnitudes are the same. Now we split the field E of the positive charge into vertical and horizontal components. Since the horizontal line is parallel to the dipole, the horizontal component forms the same angle θ with E: the vertical component is E sin θ and the horizontal component is E cos θ. Similarly, the radially inward field of the negative charge also has vertical and horizontal components, and by using alternate interior angles we can deduce that the angle between that field and the horizontal component is also θ, giving E sin θ and E cos θ. Here we observe that the horizontal components overlap and can be added, giving 2E cos θ, while the vertical components are opposite to each other and cancel out. So, having split both fields and cancelled the vertical components, we are left with only 2E cos θ: this is our net field at the equatorial position, which we call E_equatorial. Now substitute E = kq/r². Using the Pythagorean theorem, in the right triangle r² = x² + L², and cos θ = L/r with r = √(x² + L²). After substituting these values, the denominator becomes (x² + L²)^(3/2), so

E_equatorial = 2kqL / (x² + L²)^(3/2).

In this equation, we again recognize 2qL as the dipole moment, which we denote p. Since L² is much smaller than x², we can neglect it, and (x²)^(3/2) leaves x³ in the denominator. Hence the final magnitude is E_equatorial = kp/x³. When it comes to direction, in this case the net field points opposite to the dipole moment, which we represent with a negative sign in vector form; therefore the final equation is

E_equatorial = -kp/x³ (that is, kp/x³ directed opposite to p).

Direction and derivation of the net field of a dipole. Okay, before we discuss the third case, let's understand the direction of the net field at different positions in a simple way: by placing a positive test charge at each position. In the axial position, the positive test charge experiences a force in the direction of the dipole moment. If we place it in the equatorial position, however, it experiences a force in the opposite direction to the dipole moment. Similarly, if we place the positive test charge on the left side of the negative charge, meaning in the other axial position, this time the test charge feels the negative charge more strongly and is pulled toward it, which is again a force along the dipole moment; so the net field there also points along the dipole moment. From this analysis we can conclude that at the axial positions of an electric dipole, the direction of the net field is along the dipole moment. Furthermore, if we consider a point M on the opposite equatorial side, the positive charge of the dipole pushes the test charge away with its radially outward field, while the negative charge pulls it with its radially inward field; once again, the resultant field is in the exact opposite direction to the dipole moment. Hence we can say that at the equatorial position the direction of the field is opposite to the dipole moment. Apart from all these calculations, this fact can be understood solely from the dipole's field pattern: the direction of the electric field indicates the direction in which a positive test charge would move if placed in that field. The dipole's field shows that a positive test charge placed at the equatorial position moves opposite to the dipole moment, while one placed at the axial position moves along the dipole moment.

Derivation of the electric field at any point around the dipole. Let's now discuss the third and final case, in which the point M is neither in the axial position nor in the equatorial position; in other words, this time we need to determine the net field at an arbitrary point in space. Here M is located at a distance x, where the line joining M and the center of the dipole forms an angle θ with the axis of the dipole. This time we need to resolve the dipole moment itself: one component of the dipole moment lies along the line joining the center O of the dipole and the point M, and equals p cos θ; the other component is perpendicular to this line and equals p sin θ. Since we have created two new dipoles, p cos θ and p sin θ, by splitting the dipole moment, we can now forget the original dipole. The important thing to understand is this: for the dipole p cos θ, the point M lies on its axis, meaning it is in the axial position, so at M the dipole p cos θ creates an axial field E_axial along the direction of p cos θ. On the other hand, for the dipole p sin θ, the same point M lies perpendicular to its center, meaning it is in the equatorial position, so its equatorial field at M points just opposite to p sin θ. So now at our point M we have two fields: E_axial due to the dipole p cos θ, and E_equatorial due to the dipole p sin θ, and we need their net effect. We previously derived E_axial = 2kp/x³; the key point is that here the field is not due to the entire dipole moment p but only the component p cos θ, so we replace p with p cos θ, and the field at M due to p cos θ is 2kp cos θ/x³. Similarly, we derived E_equatorial = kp/x³; here the applicable moment is p sin θ, so the field at M due to p sin θ is kp sin θ/x³. The net field at M is simply the vector sum of the two, and since they are perpendicular,

E_net = √(E_axial² + E_equatorial²).

Substituting the values and taking (kp/x³)² outside the bracket leaves (2 cos θ)² + sin²θ inside, so E_net = (kp/x³) · √(4cos²θ + sin²θ). Writing 4cos²θ as 3cos²θ + cos²θ, and using cos²θ + sin²θ = 1, the final equation for the net field is

E_net = (kp/x³) · √(1 + 3cos²θ).

To determine the direction of the net field, we calculate the angle α between E_net and E_axial. This angle is found from tan α = E_equatorial/E_axial = (kp sin θ/x³)/(2kp cos θ/x³) = sin θ/(2 cos θ) = (1/2) tan θ. This gives us the angle α; however, the question is at what angle the net field lies with respect to the axis of the dipole. To understand this, assume a line parallel to the dipole axis passing through the point M: the angle between E_axial and the dipole axis is also θ, so the exact inclination of the net field with respect to the dipole axis is θ + α. By determining the inclination of the net field at any point with respect to the dipole moment, we can find the direction in which the net field points. Please note that this is the general case, which allows us to find the intensity of the field at any point around the dipole. For example, if we set θ = 90° in the derived formula, then 3cos²θ becomes 0 (since cos 90° = 0) and we obtain the same equation as the one derived for the equatorial position. Similarly, to find the field in the axial position, we simply place M on the axis and set θ = 0°: cos 0° = 1, the square root becomes √4 = 2, and we obtain the same equation as the one derived at the beginning of the video. By remembering this general formula, we can apply it to any other case as well. Apart from this, we observed that the field intensity of a dipole is inversely proportional to the cube of the distance x between the center of the dipole and the point M. In other words, as we move farther away, the field of a dipole decreases more rapidly than the field of a point charge, because the field of a point charge is inversely proportional to the square of the distance. So that's all for this video. Thank you so much for watching.
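To close the loop on the derivation, here is a small numerical sketch (my own, not from the video) that evaluates the general formula E_net = (kp/x³)·√(1 + 3cos²θ) and confirms that it reproduces the axial result at θ = 0° and the equatorial result at θ = 90°. The dipole moment and distance are arbitrary test values.

```python
import math

K = 8.99e9  # Coulomb constant 1/(4*pi*eps0), in N·m²/C²

def dipole_field(p, x, theta_deg):
    """Far-field magnitude of a dipole's field at distance x from its
    center, at angle theta from the dipole axis:
    E = (k*p/x**3) * sqrt(1 + 3*cos(theta)**2)."""
    theta = math.radians(theta_deg)
    return K * p / x**3 * math.sqrt(1 + 3 * math.cos(theta) ** 2)

p, x = 1e-9, 0.05  # dipole moment (C·m) and distance (m), illustrative values

axial = dipole_field(p, x, 0)        # should equal 2*k*p/x**3
equatorial = dipole_field(p, x, 90)  # should equal   k*p/x**3

assert math.isclose(axial, 2 * K * p / x**3)
assert math.isclose(equatorial, K * p / x**3)
print(f"E_axial = {axial:.3e} N/C, E_equatorial = {equatorial:.3e} N/C")
```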
189397
https://www.youtube.com/watch?v=sXQ9eC73hac
How to Borrow in Subtraction - Regrouping when Subtracting
Doodles and Digits | Educational Math Videos
Posted: 22 May 2023

Description: In this video, we'll explore the concept of borrowing in subtraction and regrouping. This is a crucial math skill for students in 3rd, 4th, and 5th grades as it helps them subtract larger numbers with ease and accuracy. By the end of this video, you'll have a strong understanding of borrowing in subtraction and regrouping and be able to solve problems with confidence. Whether you're a student looking to improve your math skills or a teacher seeking to reinforce these concepts in your classroom, this video is a must-watch.

Transcript:

Sometimes when you're subtracting numbers, you actually need to borrow from the place value before it. Why is that? And how do we do that? Let's look at a problem right here and I'll show you what's going on (the problem on screen works out to 842 - 325). We're going to put the two numbers in our place value chart like we typically do and start in the ones place. Two minus five. Wait a minute: I don't have five to take away from two. So what am I going to do? I actually need to borrow from the tens place. In the tens place is a four, and we're going to borrow one group of ten, so that four will turn into a three. We take that group of ten and add it to the ones place, so instead of two, it's going to be 12. That looks better: 12 minus five, I can do that. 12 minus five equals seven. Then we go to the tens place: three minus two equals one. And finally, eight minus three is five. So our answer is 517.

Let's try another one: 5,103 - 2,046. We're going to first put these numbers in a place value chart. I want you to notice something, though: what do you think might happen when we borrow? We're going to have to do something a little different on this problem. All right, we start in the ones place: three minus six. We can't subtract six from three, so we're going to have to borrow. Just like the problem before, we go to the tens place. Wait a minute: there's a zero in the tens place! So we actually have to borrow from the hundreds place. Watch what I'm doing: I borrow from the hundreds place, carry to the tens place, and then carry one more time. We're still not finished, because our ultimate goal is to borrow for the ones place: we borrow one from that ten in the tens place, so it becomes a nine, and we add that ten into the ones place: ten plus three equals 13. Whew! Now we can subtract: 13 minus six is seven, nine minus four is five, zero minus zero is zero, and five minus two is three. So the answer is 3,057. Again, sometimes you have to borrow across a few place values in order to get the numbers to a point where you can subtract. You might have to practice a few times to really get the hang of it.

Is there a way to check my thinking when I'm finished? There's actually a great way to check your thinking when using the standard algorithm for subtraction: you can take the difference and add it to the subtrahend, and it should equal the top number. So, for example, two plus one equals three; three plus two equals five; five plus one equals six. So I know that my difference is correct.

Wait a minute: what were all those fancy terms I just used? Quick review. There are three main parts to a subtraction problem: the minuend, the subtrahend, and the difference. Look at these right here: the minuend is the top number, the subtrahend is what you will be subtracting, and the difference is the answer.

Now that we've done a couple together, let's see if you can do a couple of subtraction problems on your own. 924 - 517 = 407. 29,126 - 13,421 (remember, you can pause this if you need more time) = 15,705. How'd you do? Did you get them correct?

Do you need other videos like this: how to add, how to multiply, or maybe more about fractions? Make sure you check out the other videos on our channel. Thanks for joining in, and I hope to see you soon. Bye.
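The borrowing procedure the video walks through (work right to left, and when a digit is too small, take one from the next place value, cascading across zeros) is exactly how you would code digit-by-digit subtraction. Below is a minimal sketch of that idea, plus the addition check described above. It's my own illustration, not material from the video, and it assumes non-negative whole numbers with the minuend at least as large as the subtrahend; the function name is made up for the example.

```python
def subtract_with_borrowing(minuend: int, subtrahend: int) -> int:
    """Digit-by-digit subtraction, right to left, borrowing as needed."""
    top = [int(d) for d in str(minuend)]
    bottom = [int(d) for d in str(subtrahend).rjust(len(top), "0")]
    result = []
    for i in range(len(top) - 1, -1, -1):   # start in the ones place
        if top[i] < bottom[i]:              # not enough to take away: borrow
            j = i - 1
            while top[j] == 0:              # cascade across zeros:
                top[j] = 9                  #   a 0 borrows 10 and lends 1 -> 9
                j -= 1
            top[j] -= 1                     # take one group from place j...
            top[i] += 10                    # ...and add ten to this place
        result.append(top[i] - bottom[i])
    return int("".join(str(d) for d in reversed(result)))

difference = subtract_with_borrowing(5103, 2046)
print(difference)                           # 3057
assert difference + 2046 == 5103            # the "check your thinking" step
```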
189398
https://stephenlnelson.com/articles/time-value-of-money/
EasyRefresher: Applying Time Value of Money Concepts
Stephen L. Nelson
May 18, 2015

The phrase "time value of money" must surely be one of the most used terms that people don't really understand. Almost invariably, people who don't know a discount rate from Adam use the term to explain or question any financial complexity. The irony (and most business school students and graduates know this) is that time value of money isn't all that complicated. Or at least it isn't at the conceptual level.

Analyzing Borrowing

The time value of money concept, as it applies to loans, means that you need to include interest costs in any analysis of loans. In other words, to compare "loan A" to "loan B," you need to account for the interest costs of each. Note that this isn't the same thing as saying you need to compare the interest rates, however. The interest rate of a loan is important: it's the first of the three variables used to calculate the interest charges of a loan. But you need more than just the interest rate to know what, for example, "loan A" costs. You also need to know the loan balance against which the interest rate is applied (the second variable). And you need to know for how many periods (years, months, or whatever) this calculation is made (the third variable).

Interestingly, truth-in-lending laws make it easy for consumers to make time value of money comparisons of different borrowing options. By comparing the annual percentage rate, or APR, of one loan with another, one can generally tell which borrowing option is cheapest. The APR wraps all of the costs of borrowing, all the time value of money, into a single, interest-rate-like number. By choosing the lowest APR, a consumer generally gets the cheapest loan. Unfortunately, it isn't as easy for business borrowers to get APR information. (Truth-in-lending laws, for example, apply to consumers but not to business borrowers.) Nevertheless, it's still generally most useful to compare different borrowing options by applying the APR concept. In the discussion of the RATE function, I'll describe how to calculate the APR of any loan.
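The article defers the mechanics to its discussion of the RATE function (a spreadsheet rate solver). As a stand-in, here's a hedged Python sketch of the same idea: find the periodic rate that makes the loan's payments consistent with the cash the borrower actually receives (the amount net of fees), then annualize it. The function names and the loan and fee figures are mine, made up for illustration.

```python
def payment(principal, monthly_rate, n_months):
    """Standard level-payment loan formula."""
    return principal * monthly_rate / (1 - (1 + monthly_rate) ** -n_months)

def apr(amount_financed, pmt, n_months):
    """Solve (by bisection) for the monthly rate that equates the payments
    to the net amount borrowed, then annualize. This mirrors what a
    spreadsheet RATE function does."""
    lo, hi = 1e-9, 1.0
    for _ in range(100):
        mid = (lo + hi) / 2
        if payment(amount_financed, mid, n_months) > pmt:
            hi = mid  # implied payment too big, so the rate is lower
        else:
            lo = mid
    return 12 * (lo + hi) / 2

# A $100,000, 30-year loan quoted at 6% has a payment of about $599.55.
pmt = payment(100_000, 0.06 / 12, 360)
# If $2,000 of closing costs mean you effectively receive only $98,000,
# the APR comes out above the 6% note rate:
print(f"{apr(98_000, pmt, 360):.4%}")  # roughly 6.19%
```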
Analyzing Investments

The time value of money concept also applies to investments, with a slight twist: when applied to investments, you need to factor in the interest or investment earnings generated by an investment. In other words, to compare "investment A" to "investment B," you need to account for the interest or investment returns of each.

Note again that this isn't the same thing as saying you need to compare the interest rates or investment rates of return, however. The interest rate or rate of return on an investment is important, but as with borrowing comparisons, it's only the first of the three basic variables used to calculate the investment profits from an investment. You also need to know the investment balance against which the interest rate or rate of return is applied. (This is the second variable.) And you need to know for how many periods—years, months, or whatever—this calculation is made. (This is the third variable.)

As a general rule, when people perform time value of money comparisons on investments, they compare either the present values of the two investments or the rates of return of the two investments. The rate comparison method is easier to understand because it works very much like the APRs that lenders provide consumers. To compare "investment A" with "investment B" on the basis of interest rates or investment rates of return, you compare two percentages; whichever is larger is better, so the logic goes.

Comparing investments on the basis of their rates of return, however, creates a handful of problems, as you may already know.

First, simple rate comparisons ignore the fact that the investment balance is important. For example, earning a 25% return on a billion dollars is far more profitable than earning a 100% return on a million dollars.

Second, simple rate comparisons ignore the rate at which intermediate cash flows are reinvested. If you have 1 million dollars to invest for 10 years and are choosing between one investment that pays 25% for one year and another that pays 15% for 10 years, for example, you can't know which is better unless you know the rate at which your money can be reinvested in years two through ten.

Third, return-based investment measures sometimes suffer from a mathematical phenomenon in which the return formula can't be solved for a single, unique interest rate or rate of return. It turns out, as mentioned elsewhere in this book, that the internal rate of return equation for an investment with "N" cash flows is an Nth-degree polynomial equation, with up to "N" real and complex solutions. In other words, some cash flow patterns have more than one mathematically valid internal rate of return.

Because of the aforementioned problems with applying the time value of money to investment calculations, business analysts commonly compare investments based on their present values. Two investments' cash flows can be evaluated on an "apples-to-apples" basis by comparing their present values: whichever investment provides the greater present value is better.

You can also compare the present value of an investment's cash flows to its initial cash cost, making what's known as a net present value calculation. A net present value is actually a simple cost-benefit comparison. You compare the cost of an investment, meaning its cash price, with its benefits, calculated as the present value of its future cash flows. If the net present value is positive, the benefits exceed the cost. If the net present value is negative, the cost exceeds the benefit.

A challenging feature of present value and net present value calculations concerns the choice of a discount rate, or interest rate, used to convert future cash flows to their current-day, present value. For example, some people argue in favor of using the cost of the capital used to fund an investment. If an investment is made using borrowed money that costs 8%, for example, someone taking this approach might use 8% as the discount rate. Another commonly argued approach is to use the rate of return offered by similarly risky investments. If you can make an alternative investment that forces you to bear the same risk and pays a 12% return, some people argue that you should use a 12% discount rate.
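The net present value arithmetic described above is simple enough to verify directly. Here is a minimal Python sketch with hypothetical cash flows, evaluated at the two discount rates mentioned in the previous paragraph.

```python
# Minimal sketch of a net present value calculation. The cash flows and
# discount rates below are hypothetical, chosen only for illustration.

def present_value(cash_flows, rate):
    """Discount a list of future cash flows (one per period) to today."""
    return sum(cf / (1 + rate) ** t for t, cf in enumerate(cash_flows, start=1))

cost = 10_000                    # the investment's initial cash price
future_cash_flows = [3_000] * 5  # five years of $3,000 inflows

for rate in (0.08, 0.12):        # cost-of-capital vs. comparable-risk rates
    npv = present_value(future_cash_flows, rate) - cost
    print(f"NPV at {rate:.0%}: ${npv:,.0f}")
```

At an 8% discount rate the NPV is roughly $1,980; at 12% it is roughly $810. Both are positive, so under either rate the benefits exceed the cost, but the margin depends heavily on the rate chosen.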
In practice, it's worth mentioning, one often sees discount rates set almost as a matter of management policy or as an arbitrary decision made by a key participant in the financial analysis process. In a case like this, top management might say, perhaps explicitly, that investments must produce at least a 15% return, which would mean that present value calculations should be made with a 15% discount rate.

Dealing with Inflation

One important issue you want to consider in making time value of money calculations is the effect of inflation. Over time, of course, inflation erodes the value of the currency units used to make time value of money calculations. This erosion makes it difficult to compare currency values of different time periods. For example, 1 million dollars today isn't the same thing as 1 million dollars 20 years from now.

You can, fortunately, rather easily estimate the effect of inflation in your time value of money calculations. To do so, you simply need to use the real rate of return rather than the nominal rate of return in your calculations. You calculate the real rate of return by subtracting the inflation rate from the nominal rate of return.

As an example of how all this works, suppose that you want to estimate the future value of a long-term investment in a stock index fund. Over long periods of time, the stock market returns about 10% and inflation runs about 3.5%, so the real return equals 6.5%. If you used 6.5% in your time value of money calculations—rather than the nominal rate of return of 10%—the future value amounts you'd calculate wouldn't include inflation. In effect, by subtracting the inflation rate from the nominal return, you subtract the effects of the inflation from the compounded, future value amounts you calculate.
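A minimal Python sketch of this adjustment, assuming a hypothetical $10,000 starting investment and a 20-year horizon, with the long-run return and inflation figures cited above:

```python
# Minimal sketch: compounding at the real rate (nominal minus inflation)
# produces future values stated in today's dollars. The $10,000 principal
# and 20-year horizon are hypothetical.

nominal_rate = 0.10                    # long-run stock market return
inflation = 0.035                      # long-run inflation rate
real_rate = nominal_rate - inflation   # 6.5% real return

principal = 10_000
years = 20

nominal_fv = principal * (1 + nominal_rate) ** years  # includes inflation
real_fv = principal * (1 + real_rate) ** years        # inflation removed

print(f"Nominal future value: ${nominal_fv:,.0f}")    # about $67,275
print(f"Real future value:    ${real_fv:,.0f}")       # about $35,236
```

The real figure, roughly half the nominal one, is the future value with purchasing power held constant.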
189399
https://teachy.ai/en/summaries/high-school/10th-grade/mathematics-en/exponential-function-graph-or-traditional-summary-7f87a
Summary of Exponential Function: Graph | Traditional Summary

Subject: Mathematics (10th grade) · Source: Teachy Original · Topic: Exponential Function: Graph

Contextualization

Exponential functions are a special class of mathematical functions in which the independent variable appears in the exponent. They are fundamental for describing phenomena of rapid growth and decay and are widely used in various fields of knowledge, such as biology, physics, and finance. For example, in biology, the growth of a bacterial population under ideal conditions can be modeled by an exponential function, where the population doubles every fixed time interval, resulting in extremely rapid growth.

Additionally, exponential functions are crucial in finance, particularly in the calculation of compound interest. When investing money, the interest accumulated on the principal over time can be described by an exponential function, allowing for the prediction of investment growth. Understanding the characteristics and behavior of exponential functions is therefore essential for modeling and interpreting many real-world phenomena, making their study indispensable in the field of mathematics.

Definition of Exponential Function

An exponential function is a mathematical function of the form f(x) = a^x, where 'a' is a positive constant different from 1 and 'x' is the exponent. The independent variable, 'x', appears in the exponent, which characterizes the exponential behavior of the function. This definition is fundamental to understanding how these functions model phenomena of rapid growth and decay.

Exponential functions are used to describe processes where the rate of growth or decay is proportional to the current value of the function. This means that as 'x' increases, the function grows or decays at a rate that itself increases or decreases exponentially. This behavior is observed in various areas, such as biology, physics, economics, and finance.

For example, an exponential function can model the growth of a bacterial population, where the population doubles every fixed time interval. Similarly, in finance, compound interest is calculated using exponential functions, allowing for the prediction of investment growth over time. Understanding the definition and properties of exponential functions is essential for applying these concepts in practical situations.

- General form: f(x) = a^x, where 'a' is a positive constant different from 1.
- The independent variable 'x' appears in the exponent.
- Models phenomena of rapid growth and decay.
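A minimal Python sketch of the definition, using a doubling base for growth and a halving base for decay (the specific bases are chosen only for illustration):

```python
# Minimal sketch: evaluating f(x) = a^x for a growing base (a > 1)
# and a decaying base (0 < a < 1). Note both pass through (0, 1).

def f(a, x):
    """Exponential function f(x) = a^x."""
    return a ** x

for x in range(6):
    print(f"x={x}:  2^x = {f(2, x):>2},  0.5^x = {f(0.5, x):.5f}")
```

With a = 2 the values double at each step (1, 2, 4, 8, ...); with a = 0.5 they halve (1, 0.5, 0.25, ...), previewing the growth and decay cases discussed next.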
Exponential Growth and Decay

Exponential growth occurs when the base 'a' of the exponential function is greater than 1. In this case, as 'x' increases, the value of the function f(x) = a^x grows rapidly, resulting in accelerated growth. For example, if the base is 2, the function doubles with each unit increase in 'x'. This type of growth is observed in biological populations, where the number of individuals can increase exponentially under ideal conditions.

On the other hand, exponential decay occurs when the base 'a' is between 0 and 1. In this scenario, as 'x' increases, the value of the function f(x) = a^x decreases rapidly, approaching the x-axis without ever touching it. A common example of exponential decay is radioactive decay, where the amount of a radioactive substance decreases exponentially over time.

Both types of exponential behavior are essential for modeling and understanding various natural and artificial phenomena. Exponential growth is often observed in rapid multiplication processes, while exponential decay is characteristic of rapidly diminishing processes.

- Exponential growth: base 'a' greater than 1.
- Exponential decay: base 'a' between 0 and 1.
- Models phenomena of rapid growth and rapid decay.

Graph of the Exponential Function

The graph of an exponential function y = a^x is a curve that passes through the point (0, 1), regardless of the value of the base 'a'. This point is common to all exponential functions because any number raised to the power of zero equals 1. For bases greater than 1, the graph grows rapidly as 'x' increases, while for bases between 0 and 1, the graph decreases rapidly.

The behavior of the graph depends on the base 'a'. When 'a' is greater than 1, the graph extends upward and to the right, reflecting exponential growth. When 'a' is between 0 and 1, the graph approaches the x-axis as 'x' increases, reflecting exponential decay. In both cases, the graph approaches the x-axis on one side (as 'x' decreases when 'a' > 1, and as 'x' increases when 0 < 'a' < 1) but never touches it, showing that the function never reaches zero.

Drawing the graph of an exponential function requires identifying key points, such as (0, 1), and other points obtained by substituting specific values for 'x'. Understanding the graph helps visualize the behavior of the function in different scenarios and is an essential tool for interpreting phenomena modeled by these functions.

- The graph passes through the point (0, 1).
- Rapid growth for bases greater than 1.
- Rapid decay for bases between 0 and 1.

Graph Transformations

Transformations of the graph of an exponential function involve horizontal and vertical shifts that alter the position, but not the shape, of the original graph. The function y = a^(x-h) + k represents a transformation of the basic function y = a^x, where 'h' and 'k' are constants that determine the shifts.

The term (x-h) represents a horizontal shift. If 'h' is positive, the graph shifts to the right; if 'h' is negative, the graph shifts to the left. This shift does not change the shape of the graph but alters its position along the x-axis. For example, the function y = 2^(x-2) is the graph of y = 2^x shifted 2 units to the right.

The term '+k' represents a vertical shift. If 'k' is positive, the graph shifts upward; if 'k' is negative, the graph shifts downward. This shift also does not change the shape of the graph but alters its position along the y-axis. For example, the function y = 2^x + 3 is the graph of y = 2^x shifted 3 units upward. Both shift rules are checked numerically in the sketch after the summary bullets below.

- Horizontal shift: y = a^(x-h).
- Vertical shift: y = a^x + k.
- Transformations alter the position but not the shape of the graph.
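A minimal Python sketch, using the summary's own examples y = 2^(x-2) and y = 2^x + 3 (the sample points are arbitrary):

```python
# Minimal sketch checking the shift rules stated above with the summary's
# own examples, y = 2^(x-2) and y = 2^x + 3. Sample points are arbitrary.

def f(x, a=2, h=0, k=0):
    """Transformed exponential: y = a^(x - h) + k."""
    return a ** (x - h) + k

# Horizontal shift: y = 2^(x-2) at x equals y = 2^x at x - 2.
assert f(5, h=2) == f(3)       # both are 2^3 = 8

# Vertical shift: y = 2^x + 3 is the base graph raised by 3 units.
assert f(4, k=3) == f(4) + 3   # 2^4 + 3 = 19

print(f(5, h=2), f(4, k=3))    # prints: 8 19
```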
To Remember

- Exponential Function: a function of the form f(x) = a^x, where 'a' is a positive constant different from 1.
- Exponential Growth: occurs when the base 'a' is greater than 1, resulting in rapid increase.
- Exponential Decay: occurs when the base 'a' is between 0 and 1, resulting in rapid decrease.
- Graph Transformations: changes in the graph's position through horizontal and vertical shifts.
- Compound Interest: growth of an investment over time modeled by an exponential function.

Conclusion

In this lesson, we explored the definition and properties of exponential functions, understanding how they model phenomena of rapid growth and decay. We discussed the behavior of exponential functions for different bases, highlighting the accelerated growth when the base is greater than 1 and the rapid decay when the base is between 0 and 1. We also learned to draw and interpret the graphs of these functions, identifying key points and understanding the horizontal and vertical transformations that affect the graphs' positions.

Knowledge about exponential functions is essential for various fields, such as biology, physics, and finance. Through practical examples, such as population growth and compound interest, it became clear how these functions are applied in real situations. Furthermore, the ability to draw and interpret the graphs of exponential functions is fundamental for analysis and data modeling in diverse contexts.

Understanding exponential functions enables students to solve complex problems and make informed decisions in their daily lives and future careers. Continuous exploration of this topic is therefore crucial for the development of advanced mathematical skills and the practical application of this knowledge in real-world situations.

Study Tips

- Review the practical examples discussed in class and try to create new examples based on real situations you know.
- Practice drawing graphs of different exponential functions, varying the bases and applying horizontal and vertical transformations.
- Use additional resources, such as educational videos and online exercises, to reinforce your understanding of the behavior and applications of exponential functions.